

Virtual reality

by Chris Woodford. Last updated: August 14, 2023.

You'll probably never go to Mars, swim with dolphins, run an Olympic 100 meters, or sing onstage with the Rolling Stones. But if virtual reality ever lives up to its promise, you might be able to do all these things—and many more—without even leaving your home. Unlike real reality (the actual world in which we live), virtual reality means simulating bits of our world (or completely imaginary worlds) using high-performance computers and sensory equipment, like headsets and gloves. Apart from games and entertainment, it's long been used for training airline pilots and surgeons and for helping scientists to figure out complex problems such as the structure of protein molecules. How does it work? Let's take a closer look!

Photo: Virtual pilot. This US Air Force student is learning to fly a giant C-17 Globemaster plane using a virtual reality simulator. Picture by Trenton Jancze courtesy of US Air Force.

A believable, interactive 3D computer-created world that you can explore so you feel you really are there, both mentally and physically.


Photo: The view from inside. A typical HMD has two tiny screens that show different pictures to each of your eyes, so your brain produces a combined 3D (stereoscopic) image. Picture courtesy of US Air Force.

Photos: EXOS datagloves produced by NASA in the 1990s had very intricate external sensors to detect finger movements with high precision. Picture courtesy of NASA Ames Research Center and Internet Archive.

Photo: This more elaborate EXOS glove had separate sensors on each finger segment, wired up to a single ribbon cable connected to the main VR computer. Picture by Wade Sisler courtesy of NASA Ames Research Center.

Artwork: How a fiber-optic dataglove works. Each finger has a fiber-optic cable stretched along its length. (1) At one end of the finger, a light-emitting diode (LED) shines light into the cable. (2) Light rays shoot down the cable, bouncing off the sides. (3) There are tiny abrasions in the top of each fiber through which some of the rays escape. The more you flex your fingers, the more light escapes. (4) The amount of light arriving at a photocell at the end gives a rough indication of how much you're flexing your finger. (5) A cable carries this signal off to the VR computer. This is a simplified version of the kind of dataglove VPL patented in 1992, and you'll find the idea described in much more detail in US Patent 5,097,252.
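The glove-to-computer step described above is easy to sketch in code. The snippet below is a hypothetical illustration, not VPL's actual firmware: the linear light-loss model, the calibration values, and the function name are all invented for clarity.

```python
# Hypothetical sketch of how a VR computer might convert a fiber-optic
# dataglove's photocell readings into finger-bend estimates. The linear
# light-loss model and calibration values are illustrative, not from the
# VPL patent.

def bend_fraction(photocell_reading, straight_level, fully_bent_level):
    """Map a raw photocell reading to a 0.0 (straight) - 1.0 (fully bent)
    bend estimate by linear interpolation between calibration readings."""
    span = straight_level - fully_bent_level  # light lost over a full flex
    fraction = (straight_level - photocell_reading) / span
    return max(0.0, min(1.0, fraction))  # clamp to the calibrated range

# Calibrate once per finger: read the photocell with the hand flat, then
# with a full fist. Illustrative values in arbitrary ADC units.
STRAIGHT, BENT = 950, 400

print(bend_fraction(950, STRAIGHT, BENT))  # 0.0 -> finger straight
print(bend_fraction(675, STRAIGHT, BENT))  # 0.5 -> half bent
```

In a real glove the light loss is not perfectly linear in bend angle, so a production driver would replace the interpolation with a per-finger calibration curve.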

Photo: A typical handheld virtual reality controller (complete with elastic bands), looking not so different from a video game controller. Photo courtesy of NASA Ames Research Center and Internet Archive.

If you liked this article...

Find out more on this website.

  • 3D-television
  • Augmented reality
  • Computer graphics

News and popular science

  • Apple Is Stepping Into the Metaverse. Will Anyone Care? by Kellen Browning and Mike Isaac. The New York Times, June 2, 2023. Can Apple succeed with the Metaverse where Facebook has (so far) failed?
  • Everybody Into the Metaverse! Virtual Reality Beckons Big Tech by Cade Metz. The New York Times, December 30, 2021. The Times welcomes the latest push to an ambitious new vision of the virtual world.
  • Facebook gives a glimpse of metaverse, its planned virtual reality world by Mike Isaac. The Guardian, October 29, 2021. Facebook rebrands itself "Meta" as it announces ambitious plans to build a virtual metaverse.
  • Military trials training for missions in virtual reality by Zoe Kleinman. BBC News, 1 March 2020. How Oculus Rift and Unreal Engine software are being deployed in military training.
  • What went wrong with virtual reality? by Eleanor Lawrie. BBC News, 10 January 2020. Despite all the hype, VR still isn't a mainstream technology.
  • FedEx Ground Uses Virtual Reality to Train and Retain Package Handlers by Michelle Rafter. IEEE Spectrum, 8 November 2019. How VR could help reduce staff turnover by weeding out unsuitable people before they start work.
  • VR Therapy Makes Arachnophobes Braver Around Real Spiders by Emily Waltz. IEEE Spectrum, 24 January 2019. Can VR cure your fear of spiders?
  • Touching the Virtual: How Microsoft Research is Making Virtual Reality Tangible. Microsoft Blog, 8 March 2018. A fascinating look at Microsoft's research into haptic (touch-based) VR controllers.
  • Want to Know What Virtual Reality Might Become? Look to the Past by Steven Johnson. The New York Times, November 3, 2016. What can the history of 19th-century stereoscopic toys tell us about the likely future of VR?
  • A Virtual Reality Revolution, Coming to a Headset Near You by Lorne Manly. The New York Times, November 19, 2015. Musicians, filmmakers, and games programmers try to second-guess the future of VR.
  • Virtual Reality Pioneer Looks Beyond Entertainment by Jeremy Hsu. IEEE Spectrum, April 30, 2015. Where does Stanford VR guru Jeremy Bailenson see VR going in the future?
  • Whatever happened to ... Virtual Reality? by Science@NASA, June 21, 2004. Why NASA decided to revisit virtual reality 20 years after the technology first drew attention in the 1980s.
  • Virtual Reality: Oxymoron or Pleonasm? by Nicholas Negroponte, Wired, Issue 1.06, December 1993. Early thoughts on virtual worlds from the influential MIT Media Lab pioneer.

Scholarly articles

  • The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature by Pietro Cipresso et al., Front Psychol. 2018; 9: 2086.
  • Virtual Reality as a Tool for Scientific Research by Jeremy Swan, NICHD Newsletter, September 2016.
  • Virtual Heritage: Researching and Visualizing the Past in 3D by Donald H. Sanders, Journal of Eastern Mediterranean Archaeology & Heritage Studies, Vol. 2, No. 1 (2014), pp. 30–47.

For older readers

  • Virtual Reality by Samuel Greengard. MIT Press, 2019. A short introduction that explains why VR and AR matter, looks at the different technologies available, considers social issues that they raise, and explores the likely shape of our virtual future.
  • Virtual Reality Technology by Grigore Burdea and Philippe Coiffet. Wiley-IEEE, 2017/2024. Popular VR textbook covering history, programming, and applications.
  • Learning Virtual Reality: Developing Immersive Experiences and Applications for Desktop, Web, and Mobile by Tony Parisi. O'Reilly, 2015. An up-to-date introduction for VR developers that covers everything from the basics of VR to cutting-edge products like the Oculus Rift and Google Cardboard.
  • Developing Virtual Reality Applications by Alan B. Craig, William R. Sherman, and Jeffrey D. Will. Morgan Kaufmann, 2009. More detail of the applications of VR in science, education, medicine, the military, and elsewhere.
  • Virtual Reality by Howard Rheingold. Secker & Warburg, 1991. The classic (though now somewhat dated) introduction to VR.

For younger readers

  • All About Virtual Reality by Jack Challoner. DK, 2017. A 32-page introduction for ages 7–9.

Current research

  • Advanced VR Research Centre, Loughborough University
  • Virtual Reality and Visualization Research: Bauhaus-Universität Weimar
  • Institute of Software Technology and Interactive Systems: Vienna University of Technology
  • Microsoft Research: Human-Computer Interaction
  • MIT Media Lab
  • Virtual Human Interaction Lab (VHIL) at Stanford University
Patents

  • WO 1992009963: System for creating a virtual world by Dan D Browning, Ethan D Joffe, Jaron Z Lanier, VPL Research, Inc., published June 11, 1992. Outlines a method of creating and editing a virtual world using a pictorial database.
  • US Patent 5,798,739: Virtual image display device by Michael A. Teitel, VPL Research, Inc., published August 25, 1998. A typical head-mounted display designed for VR systems.
  • US Patent 5,097,252: Motion sensor which produces an asymmetrical signal in response to symmetrical movement by Young L. Harvill et al., VPL Research, Inc., published March 17, 1992. Describes a dataglove that uses fiber-optic sensors to detect finger movements.

Text copyright © Chris Woodford 2007, 2023. All rights reserved. Full copyright notice and terms of use.



  • Review Article
  • Open access
  • Published: 25 October 2021

Augmented reality and virtual reality displays: emerging technologies and future perspectives

  • Jianghao Xiong,
  • En-Lin Hsiang,
  • Ziqian He,
  • Tao Zhan (ORCID: orcid.org/0000-0001-5511-6666) &
  • Shin-Tson Wu (ORCID: orcid.org/0000-0002-0943-0440)

Light: Science & Applications, volume 10, Article number: 216 (2021)

114k Accesses | 438 Citations | 36 Altmetric

Subjects: Liquid crystals

With rapid advances in high-speed communication and computation, augmented reality (AR) and virtual reality (VR) are emerging as next-generation display platforms for deeper human-digital interactions. Nonetheless, simultaneously matching the exceptional performance of human vision and keeping the near-eye display module compact and lightweight imposes unprecedented challenges on optical engineering. Fortunately, recent progress in holographic optical elements (HOEs) and lithography-enabled devices provides innovative ways to tackle obstacles in AR and VR that are difficult to overcome with traditional optics. In this review, we begin by introducing the basic structures of AR and VR headsets, then describe the operation principles of various HOEs and lithography-enabled devices. Their properties are analyzed in detail, including the strong wavelength and angular selectivity and the multiplexing ability of volume HOEs; the polarization dependency and active switching of liquid crystal HOEs; the fabrication and properties of micro-LEDs (light-emitting diodes); and the large design freedom of metasurfaces. Afterwards, we discuss how these devices enhance AR and VR performance, with detailed descriptions and analysis of some state-of-the-art architectures. Finally, we offer a perspective on potential developments and research directions for these photonic devices in future AR and VR displays.


Introduction

Recent advances in high-speed communication and miniature mobile computing platforms have created strong demand for deeper human-digital interactions beyond traditional flat-panel displays. Augmented reality (AR) and virtual reality (VR) headsets 1,2 are emerging as next-generation interactive displays with the ability to provide vivid three-dimensional (3D) visual experiences. Their applications include education, healthcare, engineering, and gaming, to name just a few 3,4,5. VR offers a totally immersive experience, while AR promotes interaction between the user, digital content, and the real world, displaying virtual images while retaining see-through capability. In terms of display performance, AR and VR face several common challenges in satisfying the demanding requirements of human vision, including field of view (FoV), eyebox, angular resolution, dynamic range, and correct depth cues. Another pressing demand, although not directly related to optical performance, is ergonomics. To provide a user-friendly wearing experience, AR and VR devices should be lightweight and ideally have a compact, glasses-like form factor. These requirements, nonetheless, often entail tradeoffs with one another, which makes the design of high-performance AR/VR glasses and headsets particularly challenging.

In the 1990s, AR/VR experienced its first boom, which quickly subsided due to the lack of eligible hardware and digital content 6. Over the past decade, the concept of immersive displays was revisited and received a new round of excitement. Emerging technologies like holography and lithography have greatly reshaped AR/VR display systems. In this article, we first review the basic requirements of AR/VR displays and their associated challenges. Then, we briefly describe the properties of two emerging technologies: holographic optical elements (HOEs) and lithography-based devices (Fig. 1). Next, we introduce VR and AR systems separately because of their different device structures and requirements. For the immersive VR system, we discuss the major challenges and how these emerging technologies help mitigate them. For the see-through AR system, we first review the present status of light engines and then introduce some architectures for optical combiners. Performance summaries of microdisplay light engines and optical combiners are provided, serving as a comprehensive overview of current AR display systems.

figure 1

The left side illustrates HOEs and lithography-based devices. The right side shows the challenges in VR and the architectures in AR, and how the emerging technologies can be applied.

Key parameters of AR and VR displays

AR and VR displays face several common challenges in satisfying the demanding requirements of human vision, such as FoV, eyebox, angular resolution, dynamic range, and correct depth cues. These requirements often exhibit tradeoffs with one another. Before diving into the detailed relations, it is helpful to review the basic definitions of these display parameters.

Definition of parameters

Take a VR system (Fig. 2a) as an example. The light emitted from the display module is projected into a FoV, which can be translated to the size of the image perceived by the viewer. For reference, the horizontal FoV of human vision can be as large as 160° for monocular vision and 120° for overlapped binocular vision 6. The intersection area of the ray bundles forms the exit pupil, which is usually correlated with another parameter called the eyebox. The eyebox defines the region within which the whole image FoV can be viewed without vignetting. It therefore generally manifests a 3D geometry 7, whose volume is strongly dependent on the exit pupil size. A larger eyebox offers more tolerance to accommodate users' diverse interpupillary distances (IPDs) and wiggling of the headset during use. Angular resolution, defined by dividing the total resolution of the display panel by the FoV, measures the sharpness of the perceived image. For reference, a human visual acuity of 20/20 amounts to 1 arcmin angular resolution, or 60 pixels per degree (PPD), which is considered a common goal for AR and VR displays. Another important feature of a 3D display is the depth cue. A depth cue can be induced by displaying two separate images to the left and right eyes, which forms the vergence cue. But the fixed depth of the displayed image often mismatches the actual depth of the intended 3D image, leading to an incorrect accommodation cue. This mismatch causes the so-called vergence-accommodation conflict (VAC), which will be discussed in detail later. One important observation is that the VAC issue may be more serious in AR than in VR, because the image in an AR display is directly superimposed onto the real world, which has correct depth cues. The image contrast depends on the display panel and stray light. To achieve a high dynamic range, the display panel should exhibit high brightness, a low dark level, and more than 10 bits of gray levels. Nowadays, the display brightness of a typical VR headset is about 150–200 cd/m² (or nits).

figure 2

a Schematic of a VR display defining FoV, exit pupil, eyebox, angular resolution, and accommodation cue mismatch. b Sketch of an AR display illustrating ACR

Figure 2b depicts a generic structure of an AR display. The definitions of the above parameters remain the same. One major difference is the influence of ambient light on the image contrast. For a see-through AR display, the ambient contrast ratio (ACR) 8 is commonly used to quantify the image contrast:

ACR = (L_on + T × L_am) / (L_off + T × L_am)

where L_on (L_off) represents the on (off)-state luminance (unit: nit), L_am is the ambient luminance, and T is the see-through transmittance. In general, ambient light is measured in illuminance (lux). For convenience of comparison, we convert illuminance to luminance by dividing by a factor of π, assuming a Lambertian emission profile. In a normal living room, the illuminance is about 100 lux (i.e., L_am ≈ 30 nits), while in a typical office lighting condition, L_am ≈ 150 nits. Outdoors, L_am ≈ 300 nits on an overcast day and L_am ≈ 3000 nits on a sunny day. For AR displays, the minimum ACR should be 3:1 for recognizable images, 5:1 for adequate readability, and ≥10:1 for outstanding readability. As a simple estimate that ignores optical losses, achieving ACR = 10:1 on a sunny day (~3000 nits) requires the display to deliver a brightness of roughly 30,000 nits. This poses significant challenges in finding a high-brightness microdisplay and designing a low-loss optical combiner.
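The ACR arithmetic above is easy to verify in a few lines. The sketch below implements the ACR formula and inverts it for the required on-state luminance; the function names are ours, and the dark state is idealized to zero.

```python
# Sketch of the ambient contrast ratio (ACR) formula for see-through AR
# displays: ACR = (L_on + T*L_am) / (L_off + T*L_am). All luminances in
# nits; T is the combiner's see-through transmittance.

def acr(l_on, l_off, l_am, t):
    return (l_on + t * l_am) / (l_off + t * l_am)

def brightness_for_acr(target_acr, l_off, l_am, t):
    """Invert the ACR formula to find the required on-state luminance L_on."""
    return target_acr * (l_off + t * l_am) - t * l_am

# Sunny day (~3000 nits ambient), ideal dark state, full transmittance:
needed = brightness_for_acr(10, l_off=0.0, l_am=3000.0, t=1.0)
print(needed)  # 27000.0 nits -- the same order as the ~30,000 nits quoted above
```

Any real combiner has T < 1 and extra optical losses, which is why the text's round figure of 30,000 nits is still an optimistic lower bound.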

Tradeoffs and potential solutions

Next, let us briefly review the tradeoff relations mentioned earlier. To begin with, a larger FoV leads to a lower angular resolution for a given display resolution. In theory, overcoming this tradeoff only requires a high-resolution display source, along with high-quality optics to support the corresponding modulation transfer function (MTF). To attain 60 PPD across a 100° FoV requires a 6K resolution for each eye. This may be realizable in VR headsets, because a large display panel, say 2–3 inches, can still accommodate a high resolution at acceptable manufacturing cost. However, for a glasses-like wearable AR display, the conflict between small display size and high resolution becomes obvious, as further shrinking the pixel size of a microdisplay is challenging.
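The 6K figure follows directly from the PPD definition given earlier; a minimal check (function names are ours):

```python
# Angular resolution in pixels per degree (PPD) is panel resolution divided
# by FoV, so hitting a target PPD across a given FoV fixes the horizontal
# panel resolution needed.

def required_pixels(target_ppd, fov_deg):
    return target_ppd * fov_deg

def ppd(panel_pixels, fov_deg):
    return panel_pixels / fov_deg

print(required_pixels(60, 100))  # 6000 pixels horizontally, i.e. "6K" per eye
print(ppd(2000, 100))            # a 2K panel over the same FoV gives only 20.0 PPD
```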

To circumvent this issue, the concept of the foveated display has been proposed 9,10,11,12,13. The idea rests on the fact that the human eye has high visual acuity only in the central fovea region, which accounts for about 10° of FoV. If the high-resolution image is projected only onto the fovea while the peripheral image remains low resolution, then a microdisplay with 2K resolution can satisfy the need. Regarding implementation, a straightforward way is to optically combine two display sources 9,10,11: one for the foveal and one for the peripheral FoV. This approach can be regarded as spatial multiplexing of displays. Alternatively, time multiplexing can be adopted by temporally changing the optical path to produce different magnification factors for the corresponding FoV 12. Finally, another approach without multiplexing is to use a specially designed lens with intentional distortion to achieve a non-uniform resolution density 13. Aside from implementing foveation, another great challenge is dynamically steering the foveated region as the viewer's eye moves. This task is strongly related to pupil steering, which will be discussed in detail later.

A larger eyebox or FoV usually decreases the image brightness, which often lowers the ACR. This is exactly the case for a waveguide AR system with exit pupil expansion (EPE) while operating under a strong ambient light. To improve ACR, one approach is to dynamically adjust the transmittance with a tunable dimmer 14 , 15 . Another solution is to directly boost the image brightness with a high luminance microdisplay and an efficient combiner optics. Details of this topic will be discussed in the light engine section.

Another tradeoff between FoV and eyebox in geometric optical systems results from the conservation of etendue (or optical invariant). Increasing the system etendue requires larger optics, which in turn compromises the form factor. Finally, to address the VAC issue, the display system needs to generate a proper accommodation cue, which often requires modulating the image depth or wavefront, neither of which can easily be achieved in a traditional geometric optical system. While remarkable progress has been made with freeform surfaces 16,17,18, further advancing AR and VR systems requires additional novel optics with a higher degree of freedom in structure design and light modulation. Moreover, the employed optics should be thin and lightweight. To mitigate these challenges, diffractive optics is a strong contender. Unlike geometric optics, which relies on curved surfaces to refract or reflect light, diffractive optics requires only a layer several micrometers thick to establish efficient light diffraction. Two major types of diffractive optics are HOEs, based on wavefront recording, and lithographically written devices like surface relief gratings (SRGs). While SRGs offer large design freedom in local grating geometry, a recent publication 19 indicates that combining HOEs with freeform optics also offers great potential for arbitrary wavefront generation. Furthermore, advances in lithography have also enabled optical metasurfaces beyond diffractive and refractive optics, as well as miniature display panels like micro-LEDs (light-emitting diodes). These devices hold the potential to boost the performance of current AR/VR displays while keeping a lightweight and compact form factor.

Formation and properties of HOEs

HOE generally refers to a recorded hologram that reproduces the original light wavefront. The concept of holography was proposed by Dennis Gabor 20 and refers to the process of recording a wavefront in a medium (hologram) and later reconstructing it with a reference beam. Early holography used intensity-sensitive recording materials like silver halide emulsion, dichromated gelatin, and photopolymer 21. Among them, photopolymer stands out due to its easy fabrication and ability to capture high-fidelity patterns 22,23. It has therefore found extensive applications like holographic data storage 23 and displays 24,25. Photopolymer HOEs (PPHOEs) have a relatively small refractive index modulation and therefore exhibit strong selectivity on wavelength and incident angle. Another feature of PPHOEs is that several holograms can be recorded into one photopolymer film by consecutive exposures. Later, liquid-crystal holographic optical elements (LCHOEs) based on photoalignment polarization holography were also developed 25,26. Due to the inherent anisotropy of liquid crystals, LCHOEs are extremely sensitive to the polarization state of the input light. This feature, combined with the polarization modulation ability of liquid crystal devices, offers a new possibility for dynamic wavefront modulation in display systems.

The formation of a PPHOE is illustrated in Fig. 3a. When exposed to an interference field with alternating high- and low-intensity fringes, monomers tend to move toward the bright fringes due to the higher local monomer-consumption rate. As a result, the density and refractive index are slightly higher in the bright regions. Note that the index modulation δn here is defined as the difference between the maximum and minimum refractive indices, which may be twice the value in other definitions 27. The index modulation δn is typically in the range of 0–0.06. To understand the optical properties of PPHOEs, we simulate a transmissive grating and a reflective grating using rigorous coupled-wave analysis (RCWA) 28,29 and plot the results in Fig. 3b. Details of the grating configuration can be found in Table S1. We simulate only gratings because, for a general HOE, each local region can be treated as a grating; observations on gratings therefore offer general insight into HOEs. For a transmissive grating, the angular bandwidth (efficiency > 80%) is around 5° (λ = 550 nm), while the spectral band is relatively broad, with a bandwidth of around 175 nm (7° incidence). For a reflective grating, the spectral band is narrow, with a bandwidth of around 10 nm; the angular bandwidth varies with wavelength, ranging from 2° to 20°. The strong wavelength and angular selectivity of PPHOEs is directly related to their small δn, which can be adjusted by controlling the exposure dosage.
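The simulations above use RCWA, but the qualitative link between a small δn, film thickness, and efficiency can be sketched with Kogelnik's closed-form coupled-wave expressions for thick volume gratings. This is an approximation, not the RCWA used for Fig. 3b, and the thickness values below are illustrative rather than taken from Table S1.

```python
# Kogelnik coupled-wave theory: on-Bragg diffraction efficiency of thick
# volume gratings. A small index modulation (delta_n ~ 0.03) still reaches
# high efficiency if the film is thick enough -- and a thick film is exactly
# what makes volume HOEs strongly wavelength- and angle-selective.
import math

def eta_transmission(delta_n, thickness_um, wavelength_um, cos_theta=1.0):
    nu = math.pi * delta_n * thickness_um / (wavelength_um * cos_theta)
    return math.sin(nu) ** 2

def eta_reflection(delta_n, thickness_um, wavelength_um, cos_theta=1.0):
    nu = math.pi * delta_n * thickness_um / (wavelength_um * cos_theta)
    return math.tanh(nu) ** 2

for d in (5, 10, 20):  # film thickness in micrometers, illustrative
    print(d, round(eta_transmission(0.03, d, 0.55), 3),
             round(eta_reflection(0.03, d, 0.55), 3))
```

Note that the transmission efficiency is periodic in δn·d (it can overmodulate past its first peak), while the reflection efficiency saturates toward 1, which is consistent with the distinct transmissive/reflective behaviors plotted in Fig. 3b.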

figure 3

a Schematic of the formation of PPHOE. Simulated efficiency plots for b1 transmissive and b2 reflective PPHOEs. c Working principle of multiplexed PPHOE. d Formation and molecular configurations of LCHOEs. Simulated efficiency plots for e1 transmissive and e2 reflective LCHOEs. f Illustration of polarization dependency of LCHOEs

A distinctive feature of PPHOE is the ability to multiplex several holograms into one film sample. If the exposure dosage of a recording process is controlled so that the monomers are not completely depleted in the first exposure, the remaining monomers can continue to form another hologram in the following recording process. Because the total amount of monomer is fixed, there is usually an efficiency tradeoff between multiplexed holograms. The final film sample would exhibit the wavefront modulation functions of multiple holograms (Fig. 3c ).

Liquid crystals have also been used to form HOEs. LCHOEs can generally be categorized into volume-recording and surface-alignment types. Volume-recording LCHOEs are based either on early polarization holography recordings with azo-polymers 30,31 or on holographic polymer-dispersed liquid crystals (HPDLCs) 32,33 formed from liquid-crystal-doped photopolymer. Surface-alignment LCHOEs are based on photoalignment polarization holography (PAPH) 34. The first step is to record the desired polarization pattern in a thin photoalignment layer, and the second step is to use it to align the bulk liquid crystal 25,35. Due to their simple fabrication process, high efficiency, and low scattering from the liquid crystal's self-assembly nature, surface-alignment LCHOEs based on PAPH have recently attracted increasing interest in applications like near-eye displays. Here, we focus on this surface-alignment type and refer to it simply as LCHOE hereafter.

The formation of LCHOEs is illustrated in Fig. 3d. Information about the wavefront and the local diffraction pattern is recorded in a thin photoalignment layer. The bulk liquid crystal deposited on the photoalignment layer forms a transmissive or a reflective LCHOE, depending on whether it is a nematic liquid crystal or a cholesteric liquid crystal (CLC). In a transmissive LCHOE, the bulk nematic liquid crystal molecules generally follow the pattern of the bottom alignment layer. The smallest allowable pattern period is governed by the liquid crystal distortion free-energy model, which predicts that the pattern period should generally be larger than the sample thickness 36,37. This results in a maximum diffraction angle under 20°. On the other hand, in a reflective LCHOE 38,39, the bulk CLC molecules form a stable helical structure that is tilted to match the k-vector of the bottom pattern. The structure exhibits a very low distortion free energy 40,41 and can accommodate a pattern period small enough to diffract light into total internal reflection (TIR) in a glass substrate.
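The 20° cap and the TIR claim can both be checked against the first-order grating equation, sin θ = λ/(nΛ). The cell-thickness and index values below are illustrative assumptions, not the paper's exact parameters.

```python
# First-order grating-equation check for the transmissive-vs-reflective
# LCHOE argument: the diffraction angle follows sin(theta) = lambda/(n*period).
import math

def diffraction_angle_deg(wavelength_um, period_um, n_out=1.0):
    s = wavelength_um / (period_um * n_out)
    if s > 1.0:
        return None  # evanescent: no propagating first order
    return math.degrees(math.asin(s))

wl = 0.55  # green light, micrometers
# Transmissive LCHOE: the pattern period must exceed the LC layer thickness
# (assume a ~1.6 um cell), capping the diffraction angle near 20 degrees.
print(diffraction_angle_deg(wl, 1.6))

# Reflective (CLC) LCHOE diffracting inside glass (n ~ 1.5): a 0.5 um period
# sends light beyond the ~41.8 degree critical angle, so it stays guided.
print(diffraction_angle_deg(wl, 0.5, n_out=1.5))
print(math.degrees(math.asin(1 / 1.5)))  # TIR critical angle of the substrate
```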

The diffraction property of LCHOEs is shown in Fig. 3e . The maximum refractive index modulation of LCHOE is equal to the liquid crystal birefringence (Δ n ), which may vary from 0.04 to 0.5, depending on the molecular conjugation 42 , 43 . The birefringence used in our simulation is Δ n  = 0.15. Compared to PPHOEs, the angular and spectral bandwidths are significantly larger for both transmissive and reflective LCHOEs. For a transmissive LCHOE, its angular bandwidth is around 20° ( λ  = 550 nm), while the spectral bandwidth is around 300 nm (7° incidence). For a reflective LCHOE, its spectral bandwidth is around 80 nm and angular bandwidth could vary from 15° to 50°, depending on the wavelength.

The anisotropic nature of liquid crystal leads to LCHOE’s unique polarization-dependent response to an incident light. As depicted in Fig. 3f , for a transmissive LCHOE the accumulated phase is opposite for the conjugated left-handed circular polarization (LCP) and right-handed circular polarization (RCP) states, leading to reversed diffraction directions. For a reflective LCHOE, the polarization dependency is similar to that of a normal CLC. For the circular polarization with the same handedness as the helical structure of CLC, the diffraction is strong. For the opposite circular polarization, the diffraction is negligible.
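The opposite diffraction of LCP and RCP by a transmissive LCHOE follows from the geometric (Pancharatnam-Berry) phase of a patterned half-wave retarder, which a short Jones-calculus sketch makes explicit. Sign and handedness conventions vary between texts; the circular basis below is one common choice.

```python
# Jones-calculus sketch of the geometric (Pancharatnam-Berry) phase behind a
# transmissive LCHOE's polarization response: a local half-wave retarder whose
# optic axis sits at angle phi imprints phases of equal magnitude and opposite
# sign (2*phi) on the two circular polarizations, so LCP and RCP diffract into
# opposite orders.
import numpy as np

def half_wave_plate(phi):
    """Jones matrix of a half-wave retarder with its optic axis at angle phi."""
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([[c, s], [s, -c]], dtype=complex)

lcp = np.array([1, 1j]) / np.sqrt(2)   # left-circular (in this convention)
rcp = np.array([1, -1j]) / np.sqrt(2)  # right-circular

phi = np.deg2rad(30)
out_from_lcp = half_wave_plate(phi) @ lcp  # = exp(+2j*phi) * rcp
out_from_rcp = half_wave_plate(phi) @ rcp  # = exp(-2j*phi) * lcp

print(np.angle(out_from_lcp[0]))  # phase picked up from an LCP input
print(np.angle(out_from_rcp[0]))  # equal magnitude, opposite sign
```

Since the LC optic-axis angle φ varies linearly across a PB grating, the ±2φ phase becomes a linear phase ramp of opposite slope for the two handednesses, which is exactly the reversed diffraction direction described above.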

Another distinctive property of liquid crystal is its dynamic response to an external voltage. The LC reorientation can be controlled with a relatively low voltage (<10 V rms ) and the response time is on the order of milliseconds, depending mainly on the LC viscosity and layer thickness. Methods to dynamically control LCHOEs can be categorized as active addressing and passive addressing, which can be achieved by either directly switching the LCHOE or modulating the polarization state with an active waveplate. Detailed addressing methods will be described in the VAC section.

Lithography-enabled devices

Lithography technologies are used to create arbitrary patterns on wafers, which lays the foundation of the modern integrated circuit industry 44. Photolithography is suitable for mass production, while electron/ion-beam lithography is usually used to create photomasks for photolithography or to write structures with nanometer-scale feature sizes. Recent advances in lithography have enabled engineered structures like optical metasurfaces 45 and SRGs 46, as well as micro-LED displays 47. Metasurfaces exhibit remarkable design freedom by varying the shape of meta-atoms, which can be utilized to achieve novel functions like achromatic focusing 48 and beam steering 49. Similarly, SRGs offer large design freedom by manipulating the geometry of local grating regions to realize desired optical properties. Micro-LEDs, meanwhile, exhibit several unique features, such as ultrahigh peak brightness, small aperture ratio, excellent stability, and nanosecond response time. As a result, micro-LEDs are promising candidates for AR and VR systems, providing a high ACR and the high frame rates needed to suppress motion blur. In the following sections, we briefly review the fabrication and properties of micro-LEDs and of optical modulators like metasurfaces and SRGs.

Fabrication and properties of micro-LEDs

LEDs with a chip size larger than 300 μm have been widely used in solid-state lighting and public information displays. Recently, micro-LEDs with chip sizes <5 μm have been demonstrated 50 . The first micro-LED disc with a diameter of about 12 µm was demonstrated in 2000 51 . After that, a single color (blue or green) LED microdisplay was demonstrated in 2012 52 . The high peak brightness, fast response time, true dark state, and long lifetime of micro-LEDs are attractive for display applications. Therefore, many companies have since released their micro-LED prototypes or products, ranging from large-size TVs to small-size microdisplays for AR/VR applications 53 , 54 . Here, we focus on micro-LEDs for near-eye display applications. Regarding the fabrication of micro-LEDs, through the metal-organic chemical vapor deposition (MOCVD) method, the AlGaInP epitaxial layer is grown on GaAs substrate for red LEDs, and GaN epitaxial layers on sapphire substrate for green and blue LEDs. Next, a photolithography process is applied to define the mesa and deposit electrodes. To drive the LED array, the fabricated micro-LEDs are transferred to a CMOS (complementary metal oxide semiconductor) driver board. For a small size (<2 inches) microdisplay used in AR or VR, the precision of the pick-and-place transfer process is hard to meet the high-resolution-density (>1000 pixel per inch) requirement. Thus, the main approach to assemble LED chips with driving circuits is flip-chip bonding 50 , 55 , 56 , 57 , as Fig. 4a depicts. In flip-chip bonding, the mesa and electrode pads should be defined and deposited before the transfer process, while metal bonding balls should be preprocessed on the CMOS substrate. After that, thermal-compression method is used to bond the two wafers together. However, due to the thermal mismatch of LED chip and driving board, as the pixel size decreases, the misalignment between the LED chip and the metal bonding ball on the CMOS substrate becomes serious. 
In addition, the common n-GaN layer may cause optical crosstalk between pixels, which degrades the image quality. To overcome these issues, the LED epitaxial layer can first be metal-bonded to the silicon driver board, followed by a photolithography process to define the LED mesas and electrodes. Without the need for an alignment process, the pixel size can be reduced to <5 µm 50 .

figure 4

a Illustration of flip-chip bonding technology. b Simulated IQE-LED size relations for red and blue LEDs based on ABC model. c Comparison of EQE of different LED sizes with and without KOH and ALD side wall treatment. d Angular emission profiles of LEDs with different sizes. Metasurfaces based on e resonance-tuning, f non-resonance tuning and g combination of both. h Replication master and i replicated SRG based on nanoimprint lithography. Reproduced from a ref. 55 with permission from AIP Publishing, b ref. 61 with permission from PNAS, c ref. 66 with permission from IOP Publishing, d ref. 67 with permission from AIP Publishing, e ref. 69 with permission from OSA Publishing, f ref. 48 with permission from AAAS, g ref. 70 with permission from AAAS, and h , i ref. 85 with permission from OSA Publishing

In addition to the manufacturing process, the electrical and optical characteristics of an LED also depend on its chip size. Generally, due to Shockley-Read-Hall (SRH) non-radiative recombination on the sidewall of the active area, a smaller LED chip size results in a lower internal quantum efficiency (IQE), and the peak-IQE driving point moves toward a higher current density because of the increased ratio of sidewall surface to active volume 58 , 59 , 60 . In addition, compared to GaN-based green and blue LEDs, AlGaInP-based red LEDs, with their larger surface recombination and carrier diffusion length, suffer a more severe efficiency drop 61 , 62 . Figure 4b shows the simulated IQE drop as a function of chip size for blue and red LEDs, based on the ABC model 63 . To alleviate the efficiency drop caused by sidewall defects, depositing passivation materials by atomic layer deposition (ALD) or plasma-enhanced chemical vapor deposition (PECVD) has proven helpful for both GaN- and AlGaInP-based LEDs 64 , 65 . In addition, applying KOH (potassium hydroxide) treatment after ALD can further reduce the EQE drop of micro-LEDs 66 (Fig. 4c ). Small-size LEDs also exhibit some advantages, such as a higher light extraction efficiency (LEE): compared to a 100-µm LED, the LEE of a 2-µm LED increases from 12.2 to 25.1% 67 . Moreover, the radiation pattern of a micro-LED is more directional than that of a large-size LED (Fig. 4d ), which helps to improve the lens collection efficiency in AR/VR display systems.
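The size-dependent IQE drop described above can be sketched numerically with the ABC model, where the effective SRH coefficient grows with the sidewall-to-volume ratio of the mesa. The coefficient values below are illustrative placeholders chosen only to show the trend, not fitted device data:

```python
def iqe(n, a, b=1e-11, c=1e-30):
    """ABC-model internal quantum efficiency at carrier density n (cm^-3).
    A: SRH non-radiative, B: radiative, C: Auger recombination coefficients."""
    return b * n**2 / (a * n + b * n**2 + c * n**3)

def srh_coefficient(chip_size_um, a_bulk=1e7, s_eff=1e4):
    """Effective SRH coefficient (s^-1) for a square mesa of side L:
    a bulk term plus a sidewall term scaling with the perimeter-to-area
    ratio 4/L. a_bulk and s_eff are illustrative placeholder values."""
    length_cm = chip_size_um * 1e-4
    return a_bulk + s_eff * 4.0 / length_cm

# A 2-um chip sees a much larger effective SRH rate than a 100-um chip,
# so at the same carrier density its IQE is substantially lower.
iqe_large = iqe(1e18, srh_coefficient(100))
iqe_small = iqe(1e18, srh_coefficient(2))
```

With these placeholder coefficients the 2-µm chip loses most of its IQE relative to the 100-µm chip, reproducing the qualitative behavior of Fig. 4b.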

Metasurfaces and SGs

Thanks to the advances in lithography technology, low-loss dielectric metasurfaces working in the visible band have recently emerged as a platform for wavefront shaping 45 , 48 , 68 . They consist of an array of subwavelength-spaced structures with individually engineered wavelength-dependent polarization/phase/amplitude responses. In general, the light modulation mechanisms can be classified into resonant tuning 69 (Fig. 4e ), non-resonant tuning 48 (Fig. 4f ), and a combination of both 70 (Fig. 4g ). In comparison with non-resonant tuning (based on the geometric phase and/or the dynamic propagation phase), resonant tuning (such as Fabry–Pérot resonance, Mie resonance, etc.) is usually associated with a narrower operating bandwidth and a smaller out-of-plane aspect ratio (height/width) of the nanostructures. As a result, resonant structures are easier to fabricate but more sensitive to fabrication tolerances. For both types, materials with a higher refractive index and lower absorption loss help reduce the aspect ratio of the nanostructures and improve the device efficiency. To this end, titanium dioxide (TiO 2 ) and gallium nitride (GaN) are the major choices for operation across the entire visible band 68 , 71 . While small metasurfaces (diameter <1 mm) are usually fabricated via electron-beam lithography or focused ion beam milling in the lab, mass-production capability is the key to their practical adoption. Deep ultraviolet (UV) photolithography has proven feasible for reproducing centimeter-size metalenses with decent imaging performance, although it requires multiple etching steps 72 . Interestingly, the recently developed UV nanoimprint lithography based on a high-index nanocomposite takes only a single step and can achieve an aspect ratio larger than 10, which shows great promise for high-volume production 73 .

The arbitrary wavefront-shaping capability and the thinness of metasurfaces have aroused strong research interest in novel AR/VR prototypes with improved performance. Lee et al. employed nanoimprint lithography to fabricate a centimeter-size, geometric-phase metalens eyepiece for full-color AR displays 74 . By tailoring its polarization conversion efficiency and stacking it with a circular polarizer, the virtual image can be superimposed on the surrounding scene. The large numerical aperture (NA~0.5) of the metalens eyepiece enables a wide FoV (>76°) that is difficult to obtain with conventional optics. However, the geometric-phase metalens is intrinsically a diffractive lens and therefore suffers from strong chromatic aberrations. To overcome this issue, an achromatic lens can be designed by simultaneously engineering the group delay and the group delay dispersion 75 , 76 , which will be described in detail later. Other novel and/or improved near-eye display architectures include metasurface-based contact-lens-type AR 77 , achromatic-metalens-array-enabled integral-imaging light field displays 78 , wide-FoV lightguide AR with polarization-dependent metagratings 79 , and off-axis projection-type AR with an aberration-corrected metasurface combiner 80 , 81 , 82 . Nevertheless, judging from the existing AR/VR prototypes, metasurfaces still face a strong tradeoff between numerical aperture (for metalenses), chromatic aberration, monochromatic aberration, efficiency, aperture size, and fabrication complexity.

On the other hand, SRGs are diffractive gratings that have been researched for decades as input/output couplers of waveguides 83 , 84 . Their surface is composed of corrugated microstructures, and different shapes, including binary, blazed, slanted, and even analogue profiles, can be designed. The parameters of the corrugated microstructures are determined by the target diffraction order, the operational spectral bandwidth, and the angular bandwidth. Compared to metasurfaces, SRGs have a much larger feature size and thus can be fabricated via UV photolithography and subsequent etching. They are usually replicated by nanoimprint lithography with appropriate heating and surface treatment. According to a report published a decade ago, SRGs with a height of 300 nm and a slant angle of up to 50° can be faithfully replicated with high yield and reproducibility 85 (Fig. 4h, i ).

Challenges and solutions of VR displays

The fully immersive nature of a VR headset leads to a relatively fixed configuration in which the display panel is placed in front of the viewer’s eye with imaging optics in between. Regarding system performance, although inadequate angular resolution still exists in some current VR headsets, improvements in display panel resolution enabled by advanced fabrication processes are expected to solve this issue progressively. Therefore, in the following discussion, we will mainly focus on two major challenges: form factor and 3D cue generation.

Form factor

Compact and lightweight near-eye displays are essential for a comfortable user experience and are therefore highly desirable in VR headsets. Current mainstream VR headsets usually have a considerably larger volume than eyeglasses, and most of that volume is simply empty. This is because a certain distance is required between the display panel and the viewing optics, usually close to the focal length of the lens system, as illustrated in Fig. 5a . Conventional VR headsets employ a transmissive lens with a ~4 cm focal length to offer a large FoV and eyebox. Fresnel lenses are thinner than conventional ones, but the required distance between the lens and the panel does not change significantly. In addition, the diffraction artifacts and stray light caused by the Fresnel grooves can degrade the image quality (MTF). Although the resolution density, quantified in pixels per inch (PPI), of current VR headsets is still limited, the Fresnel lens will eventually cease to be an ideal solution once high-PPI displays become available. The strong chromatic aberration of a Fresnel singlet should also be compensated if a high-quality imaging system is desired.
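The geometry described above can be illustrated with a thin-lens sketch: the panel sits roughly one focal length from the eyepiece, so the panel width and focal length together set the FoV, and the panel pixel count divided by the FoV gives the angular resolution. The numbers below are illustrative, not taken from any specific headset:

```python
import math

def fov_deg(panel_width_mm, focal_length_mm):
    """Full FoV of a simple magnifier eyepiece (thin-lens sketch): the panel
    sits one focal length from the lens, so a point at lateral position x
    maps to a viewing angle of atan(x / f)."""
    return 2 * math.degrees(math.atan(panel_width_mm / 2 / focal_length_mm))

def pixels_per_degree(panel_pixels, fov):
    """Angular resolution; ~60 PPD corresponds to 20/20 visual acuity."""
    return panel_pixels / fov

# Illustrative numbers: a 90-mm-wide, 2160-pixel-wide panel behind a
# 40-mm focal-length lens.
fov = fov_deg(90, 40)
ppd = pixels_per_degree(2160, fov)
```

This toy calculation gives a FoV near 97° but only about 22 PPD, illustrating why panel resolution, not optics, is the bottleneck for angular resolution.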

figure 5

a Schematic of a basic VR optical configuration. b Achromatic metalens used as VR eyepiece. c VR based on curved display and lenslet array. d Basic working principle of a VR display based on pancake optics. e VR with pancake optics and Fresnel lens array. f VR with pancake optics based on purely HOEs. Reprinted from b ref. 87 under the Creative Commons Attribution 4.0 License. Adapted from c ref. 88 with permission from IEEE, e ref. 91 and f ref. 92 under the Creative Commons Attribution 4.0 License

It is tempting to replace the refractive elements with a single thin diffractive lens such as a transmissive LCHOE. However, the diffractive nature of such a lens results in severe color aberrations. Interestingly, metalenses can fulfil this objective without color issues. To understand how metalenses achieve achromatic focusing, let us first examine the general lens phase profile \(\Phi (\omega ,r)\) expanded as a Taylor series 75 :

\[\Phi \left( {\omega ,r} \right) = \varphi _0\left( \omega \right) - \frac{\omega }{c}\left( {\sqrt {r^2 + F\left( \omega \right)^2} - F\left( \omega \right)} \right) \approx \Phi \left( {\omega _0,r} \right) + \frac{{\partial \Phi }}{{\partial \omega }}\left( {\omega - \omega _0} \right) + \frac{1}{2}\frac{{\partial ^2\Phi }}{{\partial \omega ^2}}\left( {\omega - \omega _0} \right)^2\]
where \(\varphi _0(\omega )\) is the phase at the lens center, \(F\left( \omega \right)\) is the focal length as a function of frequency ω , r is the radial coordinate, and \(\omega _0\) is the central operation frequency. To realize achromatic focusing, \(\partial F{{{\mathrm{/}}}}\partial \omega\) should be zero. With a designed focal length, the group delay \(\partial \Phi (\omega ,r){{{\mathrm{/}}}}\partial \omega\) and the group delay dispersion \(\partial ^2\Phi (\omega ,r){{{\mathrm{/}}}}\partial \omega ^2\) can be determined, and \(\varphi _0(\omega )\) serves as an auxiliary degree of freedom in the phase profile design. In an achromatic metalens, the group delay is a function of the radial coordinate and monotonically increases with the metalens radius. However, many designs have shown that the achievable group delay has a limited variation range 75 , 76 , 78 , 86 . According to Shrestha et al. 86 , there is an inevitable tradeoff between the maximum radius of the metalens, the NA, and the operational bandwidth. Thus, the reported achromatic metalenses in the visible usually have a limited aperture (e.g., diameter < 250 μm) and NA (e.g., <0.2). Such a tradeoff is undesirable in VR displays, as the eyepiece favors a large clear aperture (inch size) and a reasonably high NA (>0.3) to maintain a wide FoV and a reasonable eye relief 74 .
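The group-delay requirement can be made concrete with a short calculation: for a frequency-independent focal length F, the relative group delay at radius r has magnitude (√(r² + F²) − F)/c, which grows monotonically toward the lens edge. The example values below (125-µm radius, F chosen so NA ≈ 0.2) mirror the aperture/NA scale quoted above:

```python
import math

C_UM_PER_FS = 0.299792458  # speed of light in micrometers per femtosecond

def group_delay_fs(r_um, focal_um):
    """Magnitude of the relative group delay (fs) required at radius r for
    an achromatic metalens with frequency-independent focal length F:
    |dPhi/domega| = (sqrt(r^2 + F^2) - F) / c."""
    return (math.sqrt(r_um**2 + focal_um**2) - focal_um) / C_UM_PER_FS

# For a 250-um-diameter lens with NA ~ 0.2 (F ~ 612 um), the lens edge
# needs roughly tens of femtoseconds of group delay -- already near the
# limit of what realistic nanostructure libraries can provide.
edge_delay = group_delay_fs(125, 612)
```

Scaling r to millimeters at fixed NA pushes the required group delay to hundreds of femtoseconds, which is why aperture, NA, and bandwidth trade off against each other.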

To overcome this limitation, Li et al. 87 proposed a novel zone lens method. Unlike the traditional phase Fresnel lens, where the zones are determined by phase resets, the new approach divides the zones by group delay resets. In this way, the lens aperture and NA can be greatly enlarged and the group delay limit bypassed. A notable side effect of this design is the phase discontinuity at the zone boundaries, which contributes to higher-order focusing. Therefore, significant effort has been devoted to finding the optimal zone transition locations and minimizing the phase discontinuities. Using this method, they demonstrated an impressive 2-mm-diameter metalens with NA = 0.7 and nearly diffraction-limited focusing at the design wavelengths (488, 532, 658 nm) (Fig. 5b ). This metalens consists of 681 zones and works across the visible band from 470 to 670 nm, though the focusing efficiency is on the order of 10%. It is a promising starting point for employing achromatic metalenses as compact, chromatic-aberration-free eyepieces in near-eye displays. Future challenges include further increasing the aperture size, correcting the off-axis aberrations, and improving the optical efficiency.

Besides replacing the refractive lens with an achromatic metalens, another way to reduce system focal length without decreasing NA is to use a lenslet array 88 . As depicted in Fig. 5c , both the lenslet array and display panel adopt a curved structure. With the latest flexible OLED panel, the display can be easily curved in one dimension. The system exhibits a large diagonal FoV of 180° with an eyebox of 19 by 12 mm. The geometry of each lenslet is optimized separately to achieve an overall performance with high image quality and reduced distortions.

Aside from shortening the system focal length, another way to reduce the total track length is to fold the optical path. Recently, polarization-based folded lenses, also known as pancake optics, have been under active development for VR applications 89 , 90 . Figure 5d depicts the structure of an exemplary singlet pancake VR lens system. Pancake lenses can offer better imaging performance in a compact form factor because there are more degrees of freedom in the design and the actual light path is folded thrice. By using a reflective surface with positive power, the field curvature of positive refractive lenses can be compensated. The reflective surface also has no chromatic aberration while contributing considerable optical power to the system; the optical power of the refractive lenses can therefore be smaller, resulting in even weaker chromatic aberration. Compared to Fresnel lenses, pancake lenses have smooth surfaces and far fewer diffraction artifacts and stray light. However, such a pancake lens design is not perfect either; its major shortcoming is low light efficiency. With two incidences of light on the half mirror, the maximum system efficiency is limited to 25% for polarized input and 12.5% for unpolarized input light. Moreover, due to the multiple surfaces in the system, stray light caused by surface reflections and polarization leakage may lead to apparent ghost images. As a result, a catadioptric pancake VR headset usually manifests darker imagery and lower contrast than the corresponding dioptric VR.
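The efficiency ceiling quoted above follows from simple bookkeeping at the half mirror, as this minimal sketch shows:

```python
def pancake_max_efficiency(polarized_input=True, half_mirror_r=0.5):
    """Upper bound on pancake-lens throughput: the light meets the 50/50
    half mirror twice (one transmission, one reflection), and unpolarized
    input loses another half at the entrance polarizer. Ideal lossless
    films are assumed; real systems are lower still."""
    two_pass = (1 - half_mirror_r) * half_mirror_r  # T then R
    return two_pass if polarized_input else two_pass / 2
```

With a 50/50 mirror this yields 0.25 for polarized and 0.125 for unpolarized input, matching the limits stated in the text; changing the mirror split only lowers the product T × R further.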

Interestingly, the lenslet and pancake optics can be combined to further reduce the system form factor. Bang et al. 91 demonstrated a compact VR system with pancake optics and a Fresnel lenslet array. The pancake optics serves to fold the optical path between the display panel and the lenslet array (Fig. 5e ), and another Fresnel lens collects the light from the lenslet array. The system has a decent horizontal FoV of 102° and an eyebox of 8 mm. However, a certain degree of image discontinuity and crosstalk is still present, which can be improved through further optimization of the Fresnel lens and the lenslet array.

Going one step further, replacing all the conventional optics in a catadioptric VR headset with holographic optics can make the whole system even thinner. Maimone and Wang demonstrated such a lightweight, high-resolution, ultra-compact VR optical system using purely HOEs 92 . This holographic VR optics was made possible by combining several innovative optical components, including a reflective PPHOE, a reflective LCHOE, and a PPHOE-based directional backlight with laser illumination, as shown in Fig. 5f . Since all the optical power is provided by the HOEs, which have negligible weight and volume, the total physical thickness can be reduced to <10 mm. Also, unlike conventional bulk optics, the optical power of an HOE is independent of its thickness and depends only on the recording process. Another advantage of holographic optical devices is that they can be engineered to offer distinct phase profiles for different wavelengths and angles of incidence, adding extra degrees of freedom to optical designs for better imaging performance. Although only a single-color backlight has been demonstrated, such a PPHOE has the potential to achieve a full-color laser backlight through multiplexing. The PPHOE and LCHOE in the pancake optics can also be optimized at different wavelengths to achieve high-quality full-color images.

Vergence-accommodation conflict

Conventional VR displays suffer from the VAC, a common issue for stereoscopic 3D displays 93 . In current VR display modules, the distance between the display panel and the viewing optics is fixed, which means the VR imagery is displayed at a single depth. However, the image contents are generated by parallax rendering in three dimensions, offering distinct images to the two eyes. This approach provides a proper stimulus to vergence but completely ignores the accommodation cue, leading to the well-known VAC that can cause an uncomfortable user experience. Since the beginning of this century, numerous methods have been proposed to solve this critical issue. Methods to produce an accommodation cue include multifocal/varifocal displays 94 , holographic displays 95 , and integral imaging displays 96 . Alternatively, eliminating the accommodation cue with a Maxwellian-view display 93 also helps mitigate the VAC. However, holographic displays and Maxwellian-view displays generally require a totally different optical architecture from current VR systems. They are therefore more suitable for AR displays, which will be discussed later. Integral imaging, on the other hand, has an inherent tradeoff between view number and resolution. For current VR headsets pursuing resolution that matches human visual acuity, it may not be an appealing solution. Therefore, multifocal/varifocal displays that rely on depth modulation are a relatively practical and effective solution for VR headsets. Regarding the working mechanism, multifocal displays present multiple images at different depths to imitate the original 3D scene, whereas varifocal displays show only one image per time frame, with the image depth matched to the viewer’s vergence depth. Obtaining the viewer’s vergence depth, however, requires an additional eye-tracking module. Despite the different operation principles, a varifocal display can often be converted to a multifocal display as long as the varifocal module has enough modulation bandwidth to support multiple depths within a time frame.

To achieve depth modulation in a VR system, traditional tunable-focus liquid lenses 97 , 98 suffer from small apertures and large aberrations. The Alvarez lens 99 is another tunable-focus solution, but it requires mechanical adjustment, which adds to the system volume and complexity. In comparison, transmissive LCHOEs with polarization dependency can achieve focus adjustment with electronic driving, and their ultra-thinness satisfies the small-form-factor requirement of VR headsets. The diffractive behavior of transmissive LCHOEs is often interpreted through the mechanism of the Pancharatnam-Berry phase (also known as the geometric phase) 100 . They are therefore often called Pancharatnam-Berry optical elements (PBOEs), and the corresponding lens component is referred to as a Pancharatnam-Berry lens (PBL).

Two main approaches are used to switch the focus of a PBL: active addressing and passive addressing. In active addressing, the PBL itself (made of LC) is switched by an applied voltage (Fig. 6a ), so the optical power of the liquid crystal PBL can be turned on and off electrically. Stacking multiple active PBLs can produce 2^N depths, where N is the number of PBLs. The drawback of active PBLs, however, is the limited spectral bandwidth, since their diffraction efficiency is usually optimized at a single wavelength. In passive addressing, the depth modulation is achieved by changing the polarization state of the input light with a switchable half-wave plate (HWP) (Fig. 6b ); the focal length can then be switched thanks to the polarization sensitivity of PBLs. Although this approach has a slightly more complicated structure, its overall performance can be better than the active one, because PBLs made of liquid crystal polymer can be designed to manifest high efficiency across the entire visible spectrum 101 , 102 .
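The 2^N depth count from stacked active PBLs can be verified by enumerating the on/off states. Treating each lens as contributing either its full optical power or zero is a simplification of the actual addressing scheme, used here only to show the combinatorics:

```python
from itertools import product

def achievable_powers(lens_powers_diopters):
    """Distinct total optical powers from a stack of binary (on/off)
    active PBLs: each lens contributes its power or zero, giving up to
    2**N states for N lenses (fewer if some sums coincide)."""
    totals = {
        sum(on * p for on, p in zip(state, lens_powers_diopters))
        for state in product((0, 1), repeat=len(lens_powers_diopters))
    }
    return sorted(totals)
```

For example, two lenses of 0.5 D and 1.0 D give the four focal states {0, 0.5, 1.0, 1.5} D; powers in a 1:2:4 ratio keep all 2^N sums distinct, which is why binary-weighted stacks are attractive.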

figure 6

Working principles of a depth switching PBL module based on a active addressing and b passive addressing. c A four-depth multifocal display based on time multiplexing. d A two-depth multifocal display based on polarization multiplexing. Reproduced from c ref. 103 with permission from OSA Publishing and d ref. 104 with permission from OSA Publishing

With the PBL module, multifocal displays can be built using a time-multiplexing technique. Zhan et al. 103 demonstrated a four-depth multifocal display using two actively switchable liquid crystal PBLs (Fig. 6c ). The display is synchronized with the PBL module, which divides the frame rate by the number of depths. Alternatively, multifocal displays can also be achieved by polarization multiplexing, as demonstrated by Tan et al. 104 . The basic principle is to adjust the polarization state of local pixels so that the image content on the two focal planes of a PBL can be arbitrarily controlled (Fig. 6d ). The advantage of polarization multiplexing is that it does not sacrifice the frame rate, but it can only support two planes because only two orthogonal polarization states are available. Still, it can be combined with time multiplexing to halve the frame-rate sacrifice. Naturally, varifocal displays can also be built with a PBL module; a fast-response 64-depth varifocal module with six PBLs has been demonstrated 105 .

The compact structure of the PBL module naturally lends itself to integration with the above-mentioned pancake optics, making a compact VR headset with dynamic depth modulation to solve the VAC practically feasible. Still, due to the inherent diffractive nature of PBLs, the PBL module faces the issue of focal-length chromatic dispersion. Compensating for the different focal depths of the RGB colors may require additional digital corrections during image rendering.

Architectures of AR displays

Unlike VR displays, which have a relatively fixed optical configuration, AR displays come in a vast number of architectures. Therefore, instead of following a narrative of tackling different challenges, a more appropriate way to review AR displays is to introduce each architecture separately and discuss its associated engineering challenges. An AR display usually consists of a light engine and an optical combiner. The light engine serves as the display image source, while the combiner delivers the displayed images to the viewer’s eye and, in the meantime, transmits the environmental light. Some performance parameters, like frame rate and power consumption, are mainly determined by the light engine. Parameters like FoV, eyebox, and MTF are primarily dependent on the combiner optics. Moreover, attributes like image brightness, overall efficiency, and form factor are influenced by both the light engine and the combiner. In this section, we will first discuss the light engine, where the latest advances in micro-LED-on-chip are reviewed and compared with existing microdisplay systems. Then, we will introduce the two main types of combiners: free-space combiners and waveguide combiners.

Light engine

The light engine determines several essential properties of an AR system, such as image brightness, power consumption, frame rate, and basic etendue. Several types of microdisplays have been used in AR, including micro-LED, micro-organic-light-emitting-diode (micro-OLED), liquid-crystal-on-silicon (LCoS), digital micromirror device (DMD), and laser beam scanning (LBS) based on micro-electromechanical systems (MEMS). We will first describe the working principles of these devices and then analyze their performance. For those more interested in the final performance parameters than in the details, Table 1 provides a comprehensive summary.

Working principles

Micro-LED and micro-OLED are self-emissive display devices. They are usually more compact than LCoS and DMD because no illumination optics is required. The fundamentally different material systems of LEDs and OLEDs lead to different approaches for achieving full-color displays. Due to the “green gap” in LEDs, red LEDs are manufactured on a different semiconductor material from green and blue LEDs. Therefore, achieving full-color display in high-resolution-density microdisplays is quite a challenge for micro-LEDs. Among the several solutions under investigation, two main approaches stand out. The first is to combine three separate red, green, and blue (RGB) micro-LED microdisplay panels 106 . Three single-color micro-LED microdisplays are manufactured separately through flip-chip transfer technology, and the projected images from the three panels are then integrated by a trichroic prism (Fig. 7a ).

figure 7

a RGB micro-LED microdisplays combined by a trichroic prism. b QD-based micro-LED microdisplay. c Micro-OLED display with 4032 PPI. Working principles of d LCoS, e DMD, and f MEMS-LBS display modules. Reprinted from a ref. 106 with permission from IEEE, b ref. 108 with permission from Chinese Laser Press, c ref. 121 with permission from John Wiley and Sons, d ref. 124 with permission from Springer Nature, e ref. 126 with permission from Springer and f ref. 128 under the Creative Commons Attribution 4.0 License

Another solution is to assemble color-conversion materials like quantum dots (QDs) on top of blue or ultraviolet (UV) micro-LEDs 107 , 108 , 109 (Fig. 7b ). The quantum dot color filter (QDCF) on top of the micro-LED array is mainly fabricated by inkjet printing or photolithography 110 , 111 . However, the performance of color-conversion micro-LED displays is restricted by low color-conversion efficiency, blue light leakage, and color crosstalk. Extensive efforts have been made to improve QD-micro-LED performance. To boost QD conversion efficiency, structure designs like nanorings 112 and nanoholes 113 , 114 have been proposed, which utilize the Förster resonance energy transfer mechanism to transfer excess excitons in the LED active region to the QDs. To prevent blue light leakage, methods using color filters or reflectors such as a distributed Bragg reflector (DBR) 115 or a CLC film 116 on top of the QDCF have been proposed. Compared to color filters that absorb blue light, DBRs and CLC films help recycle the leaked blue light to further excite the QDs. Other methods to achieve full-color micro-LED displays, such as vertically stacked RGB micro-LED arrays 61 , 117 , 118 and monolithic wavelength-tunable nanowire LEDs 119 , are also under investigation.

Micro-OLED displays can be generally categorized into RGB OLED and white OLED (WOLED). RGB OLED displays have separate sub-pixel structures and optical cavities, which resonate at the desired wavelengths of the RGB channels, respectively. To deposit organic materials onto the separated RGB sub-pixels, a fine metal mask (FMM) that defines the deposition area is required. However, high-resolution RGB OLED microdisplays still face challenges due to the shadow effect during deposition through the FMM. To break this limitation, a silicon nitride film with a small shadow effect has been proposed as a mask for high-resolution deposition above 2000 PPI (9.3 µm) 120 .

WOLED displays use color filters to generate color images. Without the process of depositing patterned organic materials, a high-resolution density up to 4000 PPI has been achieved 121 (Fig. 7c ). However, compared to RGB OLED, the color filters in WOLED absorb about 70% of the emitted light, which limits the maximum brightness of the microdisplay. To improve the efficiency and peak brightness of WOLED microdisplays, in 2019 Sony proposed to apply newly designed cathodes (InZnO) and microlens arrays on OLED microdisplays, which increased the peak brightness from 1600 nits to 5000 nits 120 . In addition, OLEDWORKs has proposed a multi-stacked OLED 122 with optimized microcavities whose emission spectra match the transmission bands of the color filters. The multi-stacked OLED shows a higher luminous efficiency (cd/A), but also requires a higher driving voltage. Recently, by using meta-mirrors as bottom reflective anodes, patterned microcavities with more than 10,000 PPI have been obtained 123 . The high-resolution meta-mirrors generate different reflection phases in the RGB sub-pixels to achieve desirable resonant wavelengths. The narrow emission spectra from the microcavity help to reduce the loss from color filters or even eliminate the need of color filters.

LCoS and DMD are light-modulating displays that generate images by controlling the reflection of each pixel. In LCoS, light modulation is achieved by manipulating the polarization state of the output light through independent control of the liquid crystal reorientation in each pixel 124 , 125 (Fig. 7d ); both phase-only and amplitude modulators have been employed. DMD is an amplitude modulation device in which the modulation is achieved by controlling the tilt angle of bi-stable micromirrors 126 (Fig. 7e ). To generate an image, both LCoS and DMD rely on illumination systems with LEDs or lasers as the light source. For LCoS, color images can be generated either by RGB color filters on the LCoS (with white LEDs) or by color-sequential addressing (with RGB LEDs or lasers). However, LCoS requires a linearly polarized light source; for an unpolarized LED source, a polarization recycling system 127 is usually implemented to improve the optical efficiency. For a single-panel DMD, the color image is mainly obtained through color-sequential addressing. In addition, DMD does not require polarized light, so it generally exhibits a higher efficiency than LCoS when an unpolarized light source is employed.

MEMS-based LBS 128 , 129 utilizes micromirrors to directly scan RGB laser beams to form two-dimensional (2D) images (Fig. 7f ). Different gray levels are achieved by pulse width modulation (PWM) of the employed laser diodes. In practice, 2D scanning can be achieved either through a 2D scanning mirror or two 1D scanning mirrors with an additional focusing lens after the first mirror. The small size of MEMS mirror offers a very attractive form factor. At the same time, the output image has a large depth-of-focus (DoF), which is ideal for projection displays. One shortcoming, though, is that the small system etendue often hinders its applications in some traditional display systems.

Comparison of light engine performance

There are several important parameters for a light engine, including image resolution, brightness, frame rate, contrast ratio, and form factor. The resolution requirement (>2K) is similar for all types of light engines, and the improvement of resolution is usually accomplished through the manufacturing process. Thus, here we shall focus on the remaining parameters.

Image brightness usually refers to the measured luminance of a light-emitting object. This measurement, however, may not be appropriate for a light engine, as the light from the engine only forms an intermediate image that is not directly viewed by the user. On the other hand, focusing solely on the brightness of a light engine could be misleading for a wearable display system like AR. Nowadays, data projectors with thousands of lumens are available, but their power consumption is too high for a battery-powered wearable AR display. Therefore, a more appropriate way to evaluate a light engine’s brightness is its luminous efficacy (lm/W), measured by dividing the final output luminous flux (lm) by the input electric power (W). For a self-emissive device like a micro-LED or micro-OLED, the luminous efficacy is directly determined by the device itself. However, for LCoS and DMD, the overall luminous efficacy should take into consideration the light source’s luminous efficacy, the efficiency of the illumination optics, and the efficiency of the employed spatial light modulator (SLM). For a MEMS LBS engine, the efficiency of the MEMS mirror can be considered unity, so the luminous efficacy basically equals that of the employed laser sources.

As mentioned earlier, each light engine has a different scheme for generating color images. Therefore, we list the luminous efficacy of each scheme separately for a more inclusive comparison. For micro-LEDs, the situation is more complicated because the EQE depends on the chip size. Based on previous studies 130, 131, 132, 133, we separately calculate the luminous efficacy for RGB micro-LEDs with chip size ≈ 20 µm. For the direct combination of RGB micro-LEDs, the luminous efficacy is around 5 lm/W. For QD conversion with blue micro-LEDs, the luminous efficacy is around 10 lm/W under the assumption of 100% color conversion efficiency, which has been demonstrated using structural engineering 114. For micro-OLEDs, the calculated luminous efficacy is about 4–8 lm/W 120, 122. However, the lifetime and EQE of blue OLED materials depend on the driving current, and continuously displaying an image brighter than 10,000 nits may dramatically shorten the device lifetime. We compare light engines at 10,000 nits because the displayed image should reach about 1000 nits to keep ACR > 3:1 with a typical AR combiner, whose optical efficiency is lower than 10%.

For an LCoS engine using a white LED as the light source, the typical optical efficiency of the whole engine is around 10% 127, 134. The engine luminous efficacy is then estimated to be 12 lm/W with a 120-lm/W white LED source. For a color-sequential LCoS using RGB LEDs, the absorption loss from color filters is eliminated, but the luminous efficacy of the RGB LED source also decreases to about 30 lm/W because of the lower efficiency of red and green LEDs and the higher driving current 135. The final luminous efficacy of the color-sequential LCoS engine is therefore also around 10 lm/W. If RGB linearly polarized lasers are employed instead of LEDs, the LCoS engine efficiency can be quite high thanks to the high degree of collimation. The luminous efficacy of an RGB laser source is around 40 lm/W 136, so the laser-based LCoS engine is estimated to have a luminous efficacy of 32 lm/W, assuming 80% engine optical efficiency. For a DMD engine with RGB LEDs as the light source, the optical efficiency is around 50% 137, 138, which leads to a luminous efficacy of 15 lm/W. Switching to laser light sources gives a situation similar to LCoS, with a luminous efficacy of about 32 lm/W. Finally, for a MEMS-based LBS engine, there is essentially no loss from the optics, so the final luminous efficacy is 40 lm/W. Detailed calculations of luminous efficacy can be found in Supplementary Information.
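The efficacy chain above reduces to a simple product of source efficacy and engine optical efficiency. The sketch below reproduces the representative values quoted in the text (they are illustrative estimates, not measurements):

```python
# Rough luminous-efficacy chain for projection-type light engines,
# using the representative numbers quoted in the text.

def engine_efficacy(source_lm_per_w, optics_efficiency):
    """Engine efficacy = source efficacy × optical efficiency of the engine."""
    return source_lm_per_w * optics_efficiency

lcos_white_led = engine_efficacy(120, 0.10)  # white LED, ~10% engine optics → 12 lm/W
lcos_rgb_laser = engine_efficacy(40, 0.80)   # RGB lasers, ~80% engine optics → 32 lm/W
dmd_rgb_led    = engine_efficacy(30, 0.50)   # RGB LEDs, ~50% engine optics  → 15 lm/W
mems_lbs       = engine_efficacy(40, 1.0)    # MEMS mirror loss taken as ~0  → 40 lm/W
```

The same two-factor model also applies to the color-sequential LCoS case (30 lm/W source, roughly one-third optical efficiency, giving ~10 lm/W).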

Another aspect of a light engine is the frame rate, which determines the volume of information it can deliver per unit time. A high information volume is vital for constructing a 3D light field to solve the VAC issue. For micro-LEDs, the device response time is several nanoseconds, which allows visible light communication with bandwidth up to 1.5 Gbit/s 139. For an OLED microdisplay, a fast OLED with ~200 MHz bandwidth has been demonstrated 140. The frame-rate limitation for both micro-LED and OLED therefore lies in the driving circuits. Another consideration for the driving circuit is the tradeoff between resolution and frame rate, since a higher-resolution panel means more scanning lines per frame. So far, an OLED display with a 480 Hz frame rate has been demonstrated 141. For an LCoS, the frame rate is mainly limited by the LC response time. Depending on the LC material, the response time is around 1 ms for nematic LC or 200 µs for ferroelectric LC (FLC) 125. Nematic LC allows analog driving, which accommodates gray levels, typically with 8-bit depth. FLC is bistable, so PWM is used to generate gray levels. The DMD is also a binary device; its frame rate can reach 30 kHz, mainly constrained by the response time of the micromirrors. For MEMS-based LBS, the frame rate is limited by the scanning frequency of the MEMS mirrors. A 60 Hz frame rate at around 1K resolution already requires a resonance frequency of around 50 kHz, with a Q-factor up to 145,000 128. A higher frame rate or resolution requires a higher Q-factor and a larger laser modulation bandwidth, which may be challenging.
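For the binary devices (DMD, FLC-LCoS), the conversion from 1-bit switching rate to full-color gray-level frame rate can be sketched with a crude model. The sketch assumes pure time-weighted PWM with no bit-plane weighting or dithering, which real DMD controllers use to do much better; the numbers are purely illustrative:

```python
# Crude gray-level budget for a binary spatial light modulator driven
# by pure PWM (an assumption: real controllers use weighted bit planes
# and dithering, which raise the effective frame rate considerably).

binary_rate_hz = 30_000   # 1-bit switching rate quoted in the text for DMD
bit_depth = 8             # target gray-level depth per color
colors = 3                # color-sequential RGB

# Pure PWM needs (2**bit_depth - 1) equal time slots per color per frame.
full_color_frame_rate = binary_rate_hz / ((2**bit_depth - 1) * colors)
print(f"≈ {full_color_frame_rate:.0f} Hz full-color frame rate")
```

Even in this pessimistic model, the DMD comfortably supports a 30+ Hz full-color image, and weighted bit planes push this well beyond display frame rates.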

Form factor is another crucial aspect for the light engines of near-eye displays. For self-emissive displays, both micro-OLEDs and QD-based micro-LEDs can achieve full color with a single panel, so they are quite compact. A micro-LED display with separate RGB panels naturally has a larger form factor. In applications requiring a direct-view full-color panel, the extra combining optics may also increase the volume. It should be pointed out, however, that the combining optics may not be necessary for some applications like waveguide displays, because the EPE process makes the system insensitive to the spatial positions of the input RGB images. The form factor of using three RGB micro-LED panels is therefore medium. For LCoS and DMD with RGB LEDs as the light source, the form factor is larger due to the illumination optics. Still, if a lower luminous efficacy is acceptable, a smaller form factor can be achieved with simpler optics 142. If RGB lasers are used, the collimation optics can be eliminated, which greatly reduces the form factor 143. For MEMS-LBS, the form factor can be extremely compact thanks to the tiny size of the MEMS mirror and laser module.

Finally, contrast ratio (CR) also plays an important role in the observed images 8. Micro-LEDs and micro-OLEDs are self-emissive, so their CR can exceed 10^6:1. A laser beam scanner can also achieve a CR of 10^6:1 because the laser can be turned off completely in the dark state. On the other hand, LCoS and DMD are reflective displays, and their CR is around 2000:1 to 5000:1 144, 145. It is worth pointing out that the CR of a display engine plays a significant role only in dark ambient conditions. As the ambient brightness increases, the ACR is mainly governed by the display's peak brightness, as previously discussed.

The performance parameters of the different light engines are summarized in Table 1. Micro-LEDs and micro-OLEDs have similar levels of luminous efficacy, but micro-OLEDs still face burn-in and lifetime issues when driven at high current, which hinders their use as high-brightness image sources to some extent. Micro-LEDs are still under active development, and improvements in luminous efficacy can be expected as the fabrication process matures. Both devices have nanosecond response times and can potentially achieve a high frame rate with a well-designed integrated circuit; the frame rate of the driving circuit ultimately determines the motion picture response time 146. Their self-emissive nature also leads to a small form factor and high contrast ratio. LCoS and DMD engines have similar luminous efficacy, form factor, and contrast ratio. In terms of light modulation, DMD provides a higher 1-bit frame rate, while LCoS can offer both phase and amplitude modulation. MEMS-based LBS exhibits the highest luminous efficacy so far, along with an excellent form factor and contrast ratio, but the presently demonstrated 60-Hz frame rate (limited by the MEMS mirrors) could cause image flickering.

Free-space combiners

The term ‘free-space’ generally refers to the case where light propagates freely in space, as opposed to a waveguide that traps light via TIR. The combiner can be a partial mirror, as commonly used in AR systems based on traditional geometric optics, or a reflective HOE. The strong chromatic dispersion of an HOE necessitates a laser source, which usually leads to a Maxwellian-type system.

Traditional geometric designs

Several systems based on geometric optics are illustrated in Fig. 8. The simplest design uses a single freeform half-mirror 6, 147 to directly collimate the displayed images to the viewer's eye (Fig. 8a). This design can achieve a large FoV (up to 90°) 147, but the limited design freedom of a single freeform surface leads to image distortions, also called pupil swim 6. The placement of the half-mirror also results in a relatively bulky form factor. Another design, using so-called birdbath optics 6, 148, is shown in Fig. 8b. Compared to the single-combiner design, the birdbath design adds extra optics on the display side, which provides room for aberration correction. The integrated beam splitter folds the optical path, which reduces the form factor to some extent. Another way to fold the optical path is to use a TIR prism. Cheng et al. 149 designed a freeform TIR-prism combiner (Fig. 8c) offering a 54° diagonal FoV and an 8 mm exit pupil diameter. All the surfaces are freeform, which offers excellent image quality. To cancel the optical power for the transmitted environmental light, a compensator is added to the TIR prism. The whole system is well balanced between FoV, eyebox, and form factor. To free the space in front of the viewer's eye, relay optics can be used to form an intermediate image near the combiner 150, 151, as illustrated in Fig. 8d. Although this design offers more optical surfaces for aberration correction, the extra lenses also add to the system's weight and form factor.

figure 8

a Single freeform surface as the combiner. b Birdbath optics with a beam splitter and a half mirror. c Freeform TIR prism with a compensator. d Relay optics with a half mirror. Adapted from c ref. 149 with permission from OSA Publishing and d ref. 151 with permission from OSA Publishing

Regarding approaches to solving the VAC issue, the most straightforward is to integrate a tunable lens into the optical path, such as a liquid lens 152 or an Alvarez lens 99, to form a varifocal system. Alternatively, integral imaging 153, 154 can be used, by replacing the original display panel with the central depth plane of an integral imaging module. Integral imaging can also be combined with the varifocal approach to overcome the tradeoff between resolution and depth of field (DoF) 155, 156, 157. However, the inherent tradeoff between resolution and the number of views still exists in this case.

Overall, AR displays based on traditional geometric optics have a relatively simple design with a decent FoV (~60°) and eyebox (8 mm) 158. They also exhibit reasonable efficiency. An appropriate measure of AR-combiner efficiency is the output luminance (unit: nit) divided by the input luminous flux (unit: lm), which we term the combiner efficiency. For a fixed input luminous flux, the output luminance, or image brightness, is related to the FoV and exit pupil of the combiner system. If we assume the combiner system wastes no light, the maximum combiner efficiency for a typical 60° diagonal FoV and a 10-mm-square exit pupil is around 17,000 nit/lm (Eq. S2). To estimate the combiner efficiency of geometric combiners, we assume 50% half-mirror transmittance and 50% efficiency for the other optics. The final combiner efficiency is then about 4200 nit/lm, which is high compared with waveguide combiners. Nonetheless, further shrinking the system or improving its performance ultimately runs into the etendue conservation issue. In addition, it is difficult for AR systems based on traditional geometric optics to resemble normal flat glasses, because the half-mirror has to be tilted to some extent.
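The efficiency chain for the geometric combiner can be written out explicitly. The 17,000 nit/lm lossless bound is taken from the text (Eq. S2 of the paper) as an input rather than re-derived; the loss factors are the assumptions stated above:

```python
# Combiner-efficiency estimate for a geometric (half-mirror) combiner,
# following the chain of numbers in the text. The 17,000 nit/lm upper
# bound (60° diagonal FoV, 10 mm square pupil) comes from Eq. S2 of
# the paper and is taken here as a given.

max_combiner_eff = 17_000   # nit/lm, lossless limit
half_mirror_T = 0.5         # assumed half-mirror transmittance
other_optics_eff = 0.5      # assumed efficiency of the remaining optics

geometric_eff = max_combiner_eff * half_mirror_T * other_optics_eff
# ≈ 4250 nit/lm, consistent with the ~4200 nit/lm quoted in the text
```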

Maxwellian-type systems

The Maxwellian view, proposed by James Clerk Maxwell in 1860, refers to imaging a point light source into the eye pupil 159. If the light beam is modulated during the imaging process, a corresponding image is formed on the retina (Fig. 9a). Because the point source is much smaller than the eye pupil, the image is always in focus on the retina regardless of the eye lens's focus. For AR display applications, the point source is usually a laser with narrow angular and spectral bandwidths. An LED light source can also be used to build a Maxwellian system by adding an angular filtering module 160. Regarding the combiner, although in theory a half-mirror could be used, HOEs are generally preferred because they allow an off-axis configuration that places the combiner in an eyeglasses-like position. In addition, HOEs reflect less environmental light, giving the user a more natural appearance behind the display.

figure 9

a Schematic of the working principle of Maxwellian displays. Maxwellian displays based on b SLM and laser diode light source and c MEMS-LBS with a steering mirror as additional modulation method. Generation of depth cues by d computational digital holography and e scanning of steering mirror to produce multiple views. Adapted from b, d ref. 143 and c, e ref. 167 under the Creative Commons Attribution 4.0 License

To modulate the light, an SLM such as an LCoS or DMD can be placed in the light path, as shown in Fig. 9b. Alternatively, an LBS system can be used (Fig. 9c), where the intensity modulation occurs in the laser diode itself. Besides operation in the normal Maxwellian view, both implementations offer additional degrees of freedom for light modulation.

For an SLM-based system, there are several options for arranging the SLM pixels 143, 161. Maimone et al. 143 demonstrated a Maxwellian AR display with two modes: a large-DoF Maxwellian view, and a holographic view (Fig. 9d), often referred to as computer-generated holography (CGH) 162. To show an always-in-focus image with a large DoF, the image can be displayed directly on an amplitude SLM, or via amplitude encoding on a phase-only SLM 163. Alternatively, if a 3D scene with correct depth cues is to be presented, CGH optimization algorithms can generate a hologram for the SLM. The generated holographic image exhibits the natural focus-and-blur effect of a real 3D object (Fig. 9d). To better understand this feature, we again exploit the concept of etendue. The laser light source can be considered to have a very small etendue due to its excellent collimation, so the system etendue is provided by the SLM. The micron-sized pixel pitch of the SLM sets a maximum diffraction angle which, multiplied by the SLM size, equals the system etendue. By varying the displayed content on the SLM, the final exit pupil size can be changed accordingly. In the large-DoF Maxwellian mode, the exit pupil is small and the FoV is large. In the holographic mode, the reduced DoF requires a larger exit pupil, with dimensions close to the eye pupil, but the FoV is reduced accordingly due to etendue conservation. Another common concern with CGH is the computation time: achieving real-time CGH rendering with excellent image quality remains quite a challenge. Fortunately, with recent advances in algorithms 164 and the introduction of convolutional neural networks (CNNs) 165, 166, this issue is being solved at an encouraging pace. Lately, Liang et al. 166 demonstrated a real-time CGH synthesis pipeline with high image quality. The pipeline comprises an efficient CNN model that generates a complex hologram from a 3D scene and an improved encoding algorithm that converts the complex hologram into a phase-only one. An impressive 60 Hz frame rate has been achieved on a desktop computing unit.
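The FoV–eyebox tradeoff set by the SLM etendue can be sketched with a 1D space-bandwidth model. The sketch treats the product of exit pupil width and FoV (in radians, small-angle limit) as conserved and equal to N·λ; the pixel count and wavelength are assumptions, and the model ignores geometric factors of order unity:

```python
import math

# 1D space-bandwidth sketch of the FoV-eyebox tradeoff for an SLM-based
# Maxwellian/holographic engine. Assumes the conserved quantity
#     exit_pupil × FoV(rad) ≈ N_pixels × wavelength
# (small angles, one dimension); panel values are illustrative.

wavelength = 520e-9   # green laser, m
n_pixels = 4000       # assumed SLM pixel count along one dimension

invariant = n_pixels * wavelength   # ≈ 2.1e-3 m·rad

pupils = {}
for fov_deg in (20, 40, 80):
    fov_rad = math.radians(fov_deg)
    pupils[fov_deg] = invariant / fov_rad * 1e3   # exit pupil in mm
    print(f"FoV {fov_deg:>2}° → exit pupil ≈ {pupils[fov_deg]:.2f} mm")
```

Under these assumptions, a wide-FoV Maxwellian mode yields a millimeter-scale exit pupil, while shrinking the FoV buys an exit pupil comparable to the eye pupil, mirroring the two display modes described above.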

For an LBS-based system, the additional modulation can be achieved by integrating a steering module, as demonstrated by Jang et al. 167. The steering mirror can shift the focal point (viewpoint) within the eye pupil, effectively expanding the system etendue. When the steering is fast and the image content is updated simultaneously, correct 3D cues can be generated, as shown in Fig. 9e. However, there is a tradeoff between the number of viewpoints and the final image frame rate, because the total frames are divided equally among the viewpoints. Boosting the frame rate of MEMS-LBS systems by the number of views (e.g., 3 by 3) may be challenging.
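The frame-budget division can be made explicit with a line of arithmetic (viewpoint count and target rate are illustrative):

```python
# Time-multiplexed viewpoints in a steered LBS system: the scanner's
# total frame budget is divided equally among the viewpoints.

def per_view_rate(total_frame_rate_hz, views):
    return total_frame_rate_hz / views

# Keeping 60 Hz per viewpoint with a 3-by-3 grid requires the scanner
# to run at 60 * 9 = 540 Hz, the challenge noted in the text.
required_total = 60 * 3 * 3
```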

Maxwellian-type systems offer several advantages. The system efficiency is usually very high because nearly all the light is delivered into the viewer's eye. The system FoV is determined by the f/# of the combiner, and a large FoV (~80° horizontal) can be achieved 143. The VAC issue can be mitigated with an infinite-DoF image that lacks an accommodation cue, or completely solved by generating a true 3D scene as discussed above. Despite these advantages, one major weakness of Maxwellian-type systems is the tiny exit pupil, or eyebox: a small deviation of the eye pupil from the viewpoint makes the image disappear completely. Expanding the eyebox is therefore considered one of the most important challenges for Maxwellian-type systems.

Pupil duplication and steering

Methods to expand the eyebox can generally be categorized into pupil duplication 168, 169, 170, 171, 172 and pupil steering 9, 13, 167, 173. Pupil duplication simply generates multiple viewpoints to cover a large area, whereas pupil steering dynamically shifts the viewpoint position depending on the pupil location. Before reviewing detailed implementations of these two methods, it is worth discussing some of their general features. The multiple viewpoints in pupil duplication usually divide the total light intensity equally. In each time frame, however, preferably only one viewpoint should enter the user's eye pupil, to avoid ghost images. This requirement reduces the total light efficiency and also requires the viewpoint separation to be larger than the pupil diameter. At the same time, the separation should not be too large, to avoid gaps between viewpoints. Considering that the human pupil diameter changes with ambient illuminance, the design of the viewpoint separation needs special attention. Pupil steering, on the other hand, produces only one viewpoint per time frame. It is therefore more light-efficient and free from ghost images, but determining the viewpoint position requires knowledge of the eye pupil location, which demands a real-time eye-tracking module 9. Another observation is that pupil steering naturally accommodates multiple viewpoints; a pupil-steering system can therefore often be easily converted into a pupil-duplication system by generating all available viewpoints simultaneously.
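The conflicting constraints on the viewpoint separation can be stated as a tiny check. The 2–8 mm pupil range is the standard physiological range under varying illuminance; the 3 mm separation matches the value demonstrated in ref. 169:

```python
# Sketch of the viewpoint-separation constraints in pupil duplication.
# No ghosts: at most one viewpoint inside the pupil  → separation > pupil.
# No gaps:   always some viewpoint inside the pupil  → separation <= pupil.
# Both must hold across the whole physiological pupil range, which is
# why the text says the design "needs special attention".

def separation_ok(separation_mm, pupil_mm):
    no_ghost = separation_mm > pupil_mm
    no_gap = separation_mm <= pupil_mm
    return no_ghost, no_gap

# Human pupil diameter spans roughly 2-8 mm with ambient illuminance.
for pupil in (2, 4, 8):
    print(pupil, separation_ok(3.0, pupil))  # 3 mm separation, as in ref. 169
```

With a fixed separation, the two conditions are mutually exclusive for any single pupil size, so a static design can only trade ghosts in bright conditions against gaps in dark ones.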

To generate multiple viewpoints, one can modulate either the incident light or the combiner. Recall that the viewpoint is the image of the light source; duplicating or shifting the light source therefore achieves pupil duplication or steering, as illustrated in Fig. 10a. Several light-modulation schemes are depicted in Fig. 10b–e. An array of light sources can be generated with multiple laser diodes (Fig. 10b); turning on all of the sources, or just one, achieves pupil duplication or steering, respectively. A light-source array can also be produced by projecting light onto an array-type PPHOE 168 (Fig. 10c). Apart from direct adjustment of the light sources, modulating the light along its path can also effectively steer or duplicate them. Using a mechanical steering mirror, the beam can be deflected 167 (Fig. 10d), which is equivalent to shifting the light source position. Other devices, such as a grating or beam splitter, can also serve as ray deflectors/splitters 170, 171 (Fig. 10e).

figure 10

a Schematic of duplicating (or shift) viewpoint by modulation of incident light. Light modulation by b multiple laser diodes, c HOE lens array, d steering mirror and e grating or beam splitters. f Pupil duplication with multiplexed PPHOE. g Pupil steering with LCHOE. Reproduced from c ref. 168 under the Creative Commons Attribution 4.0 License, e ref. 169 with permission from OSA Publishing, f ref. 171 with permission from OSA Publishing and g ref. 173 with permission from OSA Publishing

Nonetheless, one problem with light-source duplication/shifting for pupil duplication/steering is that the aberrations in peripheral viewpoints are often serious 168, 173. The HOE combiner is usually recorded at one incident angle; for incident angles deviating far from it, considerable aberrations occur, especially in an off-axis configuration. To solve this problem, the modulation can instead be applied to the combiner. While mechanically shifting the combiner 9 can achieve continuous pupil steering, integrating it into an AR display with a small form factor remains a challenge. Alternatively, the versatile functions of HOEs offer possible solutions for combiner modulation. Kim and Park 169 demonstrated a pupil-duplication system with a multiplexed PPHOE (Fig. 10f), in which the wavefronts of several viewpoints are recorded into one PPHOE sample. Three viewpoints with a separation of 3 mm were achieved, although slight ghost images and gaps can be observed in the viewpoint transitions. For a PPHOE to achieve pupil steering, the multiplexed PPHOE needs to record different focal points at different incident angles. If each hologram has no angular crosstalk, then, with an additional device to change the incident angle, the viewpoint can be steered. Alternatively, Xiong et al. 173 demonstrated a pupil-steering system with LCHOEs in a simpler configuration (Fig. 10g). The polarization sensitivity of the LCHOE makes it possible to select which LCHOE functions using a polarization converter (PC). When the PC is off, the incident RCP light is focused by the right-handed LCHOE. When the PC is on, the RCP light is first converted to LCP light and passes through the right-handed LCHOE; it is then focused by the left-handed LCHOE into another viewpoint. Adding more viewpoints requires stacking more PC-LCHOE pairs, which can be done compactly with thin glass substrates. Realizing pupil duplication, by contrast, only requires stacking multiple low-efficiency LCHOEs. For both PPHOEs and LCHOEs, because the hologram for each viewpoint is recorded independently, the aberrations can be eliminated.

Regarding system performance, in theory the FoV is not limited and can reach a large value, such as 80° in the horizontal direction 143. The definition of the eyebox differs from that of traditional imaging systems: for a single viewpoint, it has the same size as the eye pupil, but the viewpoint steering/duplication capability expands the total system eyebox accordingly. The combiner efficiency of pupil-steering systems can reach 47,000 nit/lm for an 80° × 80° FoV and a 4-mm pupil diameter (Eq. S2). At such a high brightness level, eye safety could be a concern 174. For a pupil-duplication system, the combiner efficiency is divided by the number of viewpoints; with a 4-by-4 viewpoint array, it can still reach 3000 nit/lm. Despite the potential gains of pupil duplication/steering, the situation becomes much more complicated once eyeball rotation is considered 175. A perfect pupil-steering system requires 5D steering, which poses a challenge for practical implementation.
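As a sanity check on the quoted 47,000 nit/lm, a simplified luminance model reproduces the figure to within a few percent. The rectangular-FoV solid angle and circular pupil below are modeling assumptions; the exact form of the paper's Eq. S2 may differ:

```python
import math

# Simplified combiner-efficiency estimate (nit/lm) for a pupil-steering
# system: luminance = flux / (pupil area × FoV solid angle), assuming
# all input light reaches the single active viewpoint.

def combiner_eff(fov_x_deg, fov_y_deg, pupil_mm):
    # Solid angle of a rectangular FoV (an assumed approximation):
    omega = 4 * math.sin(math.radians(fov_x_deg / 2)) \
              * math.sin(math.radians(fov_y_deg / 2))      # steradians
    area = math.pi * (pupil_mm * 1e-3 / 2) ** 2            # m^2, circular pupil
    return 1.0 / (omega * area)                            # nit per lumen

eff = combiner_eff(80, 80, 4)   # ≈ 4.8e4, close to the quoted ~47,000 nit/lm
```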

Pin-light systems

Recently, another type of display closely related to the Maxwellian view, called the pin-light display 148, 176, has been proposed. Its general working principle is illustrated in Fig. 11a. Each pin-light source is a Maxwellian view with a large DoF. When the eye pupil is no longer placed near the source point, as it is in the Maxwellian view, each image source forms only an elemental view with a small FoV on the retina. However, if the image-source array is arranged properly, the elemental views can be stitched together into a large FoV. Depending on the specific optical architecture, pin-light displays can take different forms. In the initial feasibility demonstration, Maimone et al. 176 used a side-lit waveguide plate as the point light source (Fig. 11b). The light inside the waveguide plate is extracted by etched divots, forming a pin-light source array. A transmissive SLM (an LCD) placed behind the waveguide plate modulates the light intensity and forms the image. The display has an impressive FoV of 110° thanks to the large scattering-angle range. However, placing the LCD directly before the eye causes insufficient resolution density and diffraction of background light.

figure 11

a Schematic drawing of the working principle of pin-light display. b Pin-light display utilizing a pin-light source and a transmissive SLM. c An example of pin-mirror display with a birdbath optics. d SWD system with LBS image source and off-axis lens array. Reprinted from b ref. 176 under the Creative Commons Attribution 4.0 License and d ref. 180 with permission from OSA Publishing

To avoid these issues, architectures using pin-mirrors 177, 178, 179 have been proposed. In these systems, the final combiner is an array of tiny mirrors 178, 179 or gratings 177, in contrast to counterparts using large-area combiners. An exemplary system with a birdbath design is depicted in Fig. 11c. Here the pin-mirrors replace the original beam splitter of the birdbath and can thus shrink the system volume, while at the same time providing large-DoF pin-light images. Nonetheless, such a system may still face the etendue conservation issue. Meanwhile, the pin-mirrors cannot be made too small, to prevent diffraction from degrading the resolution density; their influence on the see-through background should also be considered in the system design.

To overcome etendue conservation and improve see-through quality, Xiong et al. 180 proposed another type of pin-light system that exploits the etendue-expansion property of a waveguide, also referred to as a scanning waveguide display (SWD). As illustrated in Fig. 11d, the system uses an LBS as the image source. The collimated scanned laser rays are trapped in the waveguide and encounter an array of off-axis lenses. At each encounter, the lens couples the laser rays out and forms a pin-light source. SWD has the merits of good see-through quality and large etendue. A large FoV of 100° was demonstrated with the help of an ultra-low-f/# lens array based on an LCHOE. However, issues such as insufficient image resolution density and image non-uniformity remain to be overcome. Further improvement may require optimization of the Gaussian beam profile and an additional EPE module 180.

Overall, pin-light systems inherit the large DoF of the Maxwellian view. With an adequate number of pin-light sources, the FoV and eyebox can be expanded accordingly. Nonetheless, across their different implementations, a common issue of pin-light systems is image uniformity. The overlapped regions of the elemental views have higher light intensity than the non-overlapped regions, which becomes even more complicated considering the dynamic change of pupil size. In theory, the displayed image can be pre-processed to compensate for the optical non-uniformity, but that requires knowledge of the precise pupil location (and possibly size), and therefore an accurate eye-tracking module 176. Regarding system performance, pin-mirror systems modified from other free-space systems generally share similar FoV and eyebox with the original systems, though the combiner efficiency may be lower due to the small size of the pin-mirrors. SWD, on the other hand, shares the large FoV and DoF of the Maxwellian view and the large eyebox of waveguide combiners; its combiner efficiency may also be lower due to the EPE process.

Waveguide combiner

Besides free-space combiners, another common architecture in AR displays is the waveguide combiner. The term ‘waveguide’ indicates that light is trapped in a substrate by TIR. One distinctive feature of a waveguide combiner is the EPE process, which effectively enlarges the system etendue: a portion of the trapped light is coupled out of the waveguide at each TIR, enlarging the effective eyebox. According to the features of the couplers, we divide waveguide combiners into two types, diffractive and achromatic, as described in the following.

Diffractive waveguides

As the name implies, diffractive waveguides use diffractive elements as couplers. The in-coupler is usually a diffraction grating, and the out-coupler in most cases is a grating with the same period as the in-coupler, although it can also be an off-axis lens with a small curvature to generate an image at finite depth. Three major diffractive couplers have been developed: SRGs, photopolymer gratings (PPGs), and liquid crystal gratings (grating-type LCHOEs, also known as polarization volume gratings (PVGs)). Some general design protocols are that the in-coupler should have relatively high efficiency, while the out-coupler should produce a uniform light output. A uniform output usually requires a low-efficiency coupler with extra degrees of freedom for locally modulating the coupling efficiency. Both couplers should have adequate angular bandwidth to accommodate a reasonable FoV. In addition, the out-coupler should be optimized to avoid undesired diffractions, including outward diffraction of the TIR light and diffraction of environmental light into the user's eyes, referred to as light leakage and rainbow, respectively. Suppressing these unwanted diffractions should be part of the waveguide optimization, alongside performance parameters such as efficiency and uniformity.

The basic working principles of diffractive waveguide-based AR systems are illustrated in Fig. 12. For the SRG-based waveguides 6, 8 (Fig. 12a), the in-coupler can be transmissive or reflective 181, 182. The grating geometry can be optimized for coupling efficiency with a large degree of freedom 183. For the out-coupler, a reflective SRG with a large slant angle is preferred, to suppress the transmission orders 184. In addition, a uniform light output usually requires a gradient efficiency distribution to compensate for the light intensity that decreases along the out-coupling path. This can be achieved by varying local grating parameters such as height and duty cycle 6. For the PPG-based waveguides 185 (Fig. 12b), the small angular bandwidth of a high-efficiency transmissive PPG prohibits its use as an in-coupler, so both couplers are usually reflective. The gradient efficiency can be achieved by space-variant exposure to control the local index modulation 186 or by local variation of the Bragg slant angle through freeform exposure 19. Because of the relatively small angular bandwidth of PPGs, achieving a decent FoV usually requires stacking two 187 or three 188 PPGs for a single color. The PVG-based waveguides 189 (Fig. 12c) also prefer reflective PVGs as in-couplers, because transmissive PVGs are much more difficult to fabricate due to the LC alignment issue; moreover, the angular bandwidth of transmissive PVGs in the Bragg regime is not large enough to support a decent FoV 29. For the out-coupler, the angular bandwidth of a single reflective PVG can usually support a reasonable FoV. To obtain a uniform light output, a polarization management layer 190 consisting of an LC layer with spatially variant orientations can be utilized. It offers an additional degree of freedom to control the polarization state of the TIR light, so the diffraction efficiency can be locally controlled thanks to the strong polarization sensitivity of the PVG.

figure 12

Schematics of waveguide combiners based on a SRGs, b PPGs and c PVGs. Reprinted from a ref. 85 with permission from OSA Publishing, b ref. 185 with permission from John Wiley and Sons and c ref. 189 with permission from OSA Publishing

The above discussion describes the basic working principle of 1D EPE. Nonetheless, for a 1D EPE to produce a large eyebox, the exit pupil of the original image must already be large in the unexpanded direction, which poses design challenges for the light engine. A 2D EPE is therefore favored for practical applications. To expand the exit pupil in two dimensions, two consecutive 1D EPEs can be used 191 , as depicted in Fig. 13a. The first 1D EPE occurs at the turning grating, where the light is duplicated in the y direction and then redirected into the x direction. The light rays then encounter the out-coupler and are expanded in the x direction. To better understand the 2D EPE process, the k-vector diagram (Fig. 13b) can be used. For light propagating in air with wavenumber k0, the possible k-values in the x and y directions (kx and ky) fall within the circle of radius k0. When the light is trapped by TIR, (kx, ky) lies outside the circle of radius k0 but inside the circle of radius nk0, where n is the refractive index of the substrate. kx and ky remain unchanged during TIR and change only at each diffraction event. The central red box in Fig. 13b indicates the possible k-values within the system FoV. At the in-coupler, the grating k-vector is added to the incident k-values, shifting them into the TIR region. The turning grating then applies another k-vector, shifting the k-values toward the x-axis. Finally, the out-coupler shifts the k-values back into the free-propagation region in air. One observation is that the size of the red box is mostly limited by the width of the TIR band. To accommodate a larger FoV, the outer boundary of the TIR band must be expanded, which amounts to increasing the waveguide refractive index. Another important fact is that when kx and ky approach the outer boundary, the uniformity of the output light degrades, because the propagation angle inside the waveguide approaches 90°: the spatial distance between two consecutive TIR bounces becomes so large that the out-coupled beams are spatially separated to an unacceptable degree. The range of k-values usable in practice is therefore further reduced.
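
The TIR condition above can be sketched numerically. This is a minimal illustration with assumed numbers (532 nm light, substrate index n = 1.8, an in-coupler grating vector of 1.5 k0 along x), not values from the text: light is guided only if its in-plane wavenumber lies in the annulus between k0 and nk0 of Fig. 13b.

```python
import numpy as np

# Assumed parameters for illustration only.
wavelength = 532e-9
n = 1.8                                # substrate refractive index
k0 = 2 * np.pi / wavelength            # free-space wavenumber
K_in = np.array([1.5 * k0, 0.0])       # assumed in-coupler grating vector

def is_guided(kx, ky):
    """TIR requires k0 < |k_in-plane| < n * k0 (the annulus in Fig. 13b)."""
    k_par = np.hypot(kx, ky)
    return bool(k0 < k_par < n * k0)

# A ray at the FoV center travels normal to the waveguide: kx = ky = 0.
print(is_guided(0.0, 0.0))             # False: free propagation in air
print(is_guided(K_in[0], K_in[1]))     # True: trapped by TIR after in-coupling
```

The in-coupler's role is simply to add a k-vector large enough to move the ray from the inner circle into the TIR annulus.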

figure 13

a Schematic of 2D EPE based on two consecutive 1D EPEs. Gray/black arrows indicate light in air/TIR. Black dots denote TIRs. b k-diagram of the two-1D-EPE scheme. c Schematic of 2D EPE with a 2D hexagonal grating d k-diagram of the 2D-grating scheme

Aside from two consecutive 1D EPEs, 2D EPE can also be implemented directly with a 2D grating 192 . An example using a hexagonal grating is depicted in Fig. 13c. The hexagonal grating provides k-vectors in six directions. In the k-diagram (Fig. 13d), after in-coupling the k-values are distributed into six regions through multiple diffractions, and out-coupling occurs simultaneously with pupil expansion. Besides a more concise out-coupler configuration, the 2D EPE scheme offers more design freedom than two 1D EPEs because the local grating parameters can be adjusted in a 2D manner. This higher design freedom can potentially deliver better output-light uniformity, at the cost of a higher computational demand during optimization. Furthermore, the unslanted grating geometry usually leads to large light leakage and possibly low efficiency. Adding slant to the geometry helps alleviate the issue, but the associated fabrication may be more challenging.
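
The six grating k-vectors of a hexagonal grating can be written down directly. A short sketch, with an assumed pitch of 400 nm purely for illustration; the key property is that the six vectors cancel in pairs, so any closed loop of diffractions restores the original in-plane k, as in the k-diagram of Fig. 13d:

```python
import numpy as np

pitch = 400e-9                                 # assumed grating pitch
K = 2 * np.pi / pitch                          # fundamental grating-vector magnitude
angles = np.deg2rad(np.arange(0, 360, 60))     # six directions, 60 degrees apart
grating_vectors = np.stack([K * np.cos(angles), K * np.sin(angles)], axis=1)

for gx, gy in grating_vectors:
    print(f"({gx:+.3e}, {gy:+.3e}) 1/m")

# Opposite vectors cancel: a closed diffraction loop adds zero net k.
print(np.allclose(grating_vectors.sum(axis=0) / K, 0.0))   # True
```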

Finally, we discuss the generation of full-color images. One important point to clarify is that although diffractive gratings are used here, the final image generally shows no color dispersion, even with a broadband light source such as an LED. This is easily understood in the 1D EPE scheme: the in-coupler and out-coupler have opposite k-vectors, so their color dispersions cancel each other. In the 2D EPE schemes, the k-vectors always form a closed loop from in-coupled to out-coupled light, so the color dispersion likewise vanishes. The real difficulty in using a single waveguide for full-color images lies instead in the FoV and light uniformity. The spread of propagation angles among different colors results in different out-coupling conditions for each color. More specifically, if the red and blue channels share the same in-coupler, the propagation angle of the red light is larger than that of the blue light; red light in the peripheral FoV is therefore more prone to the large-angle non-uniformity issue mentioned above. To achieve a decent FoV and light uniformity, two or three waveguide layers with different grating pitches are usually adopted.
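
The cancellation argument is easy to verify numerically. In this sketch the grating pitch (450 nm) and incident angle (10°) are assumed values chosen for illustration: the in-coupler adds +Kg to the in-plane k and the out-coupler adds −Kg, so every wavelength exits at the incident angle, even though the guided propagation angle differs per color.

```python
import numpy as np

pitch = 450e-9
Kg = 2 * np.pi / pitch                  # grating k-vector magnitude (assumed)

out_angles = []
for wavelength in (450e-9, 532e-9, 635e-9):   # blue, green, red
    k0 = 2 * np.pi / wavelength
    kx = k0 * np.sin(np.deg2rad(10.0))  # incident in-plane k in air
    kx_guided = kx + Kg                 # in-coupler adds +Kg
    kx_out = kx_guided - Kg             # out-coupler adds -Kg: closed loop
    out_angles.append(float(np.rad2deg(np.arcsin(kx_out / k0))))
    # Inside the waveguide, kx_guided / k0 differs per color, so the TIR
    # propagation angles (and hence uniformity) differ -- the remaining issue.

print(out_angles)   # every color exits at 10 degrees: no net dispersion
```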

Regarding system performance, the eyebox is generally large enough (~10 mm) to accommodate different users' IPDs and alignment shifts during operation. A parameter of significant concern for a waveguide combiner is its FoV. From the k-vector analysis, the theoretical upper limit is determined by the waveguide refractive index, but light/color uniformity also bounds the effective FoV, beyond which the degradation of image quality becomes unacceptable. Current diffractive waveguide combiners generally achieve a FoV of about 50°. To further increase the FoV, a straightforward method is to use a waveguide with a higher refractive index. Another is to tile the FoV by directly stacking multiple waveguides or by using polarization-sensitive couplers 79 , 193 . As for optical efficiency, a typical value for a diffractive waveguide combiner is around 50–200 nit/lm 6 , 189 . In addition, waveguide combiners with grating out-couplers generate an image at a fixed depth of infinity, which leads to the VAC issue. To tackle VAC in waveguide architectures, the most practical way is to generate multiple depths and use a varifocal or multifocal driving scheme, similar to the approaches described for VR systems. But adding more depths usually means stacking more waveguide layers 194 . Considering the additional waveguide layers needed for the RGB colors, the final waveguide thickness would undoubtedly increase.
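
The index-limited FoV can be estimated with a simplified 1D model (an assumption-laden sketch, not the paper's derivation): the in-coupled k-range, of width 2·k0·sin(FoV/2), must fit inside the TIR band of width (n − 1)·k0, giving FoV_max = 2·arcsin((n − 1)/2). This ignores the practical margins needed near both TIR boundaries, but it reproduces the ~50° scale quoted above for typical glass.

```python
import numpy as np

def fov_limit_deg(n):
    """Simplified 1D FoV upper bound from the TIR band width (n - 1) * k0."""
    return float(np.rad2deg(2 * np.arcsin((n - 1) / 2)))

for n in (1.5, 1.8, 2.0):
    print(f"n = {n}: FoV limit ~ {fov_limit_deg(n):.1f} deg")
```

For n = 1.8 this gives roughly 47°, consistent with the ~50° figure for current diffractive waveguides.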

Other issues specific to waveguides include light leakage, see-through ghost images, and the rainbow effect. Light leakage refers to out-coupled light that exits toward the environment, as depicted in Fig. 14a. Aside from decreasing efficiency, the leakage gives the user an unnatural “bright-eye” appearance and raises privacy concerns. Optimizing the grating structure, such as the SRG geometry, may reduce the leakage. A see-through ghost is formed by consecutive in-coupling and out-coupling events at the out-coupler grating, as sketched in Fig. 14b. Through this process, a real object at finite depth can produce a ghost image shifted in both FoV and depth. Generally, an out-coupler with higher efficiency suffers more from see-through ghosting. Rainbow is caused by the diffraction of environmental light into the user's eye, as sketched in Fig. 14c. Color dispersion occurs in this case because there is no cancelling k-vector. Using the k-diagram, we can gain deeper insight into rainbow formation. Here we take the EPE structure in Fig. 13a as an example. As depicted in Fig. 14d, after diffraction by the turning grating and the out-coupler grating, the k-values are distributed over two circles that are shifted from the origin by the grating k-vectors. Some diffracted light can enter the see-through FoV and form a rainbow. To reduce the rainbow, a straightforward way is to use a higher-index substrate. With a higher refractive index, the outer boundary of the k-diagram is expanded and can accommodate larger grating k-vectors. The enlarged k-vectors “push” these two circles outwards, reducing their overlap with the see-through FoV. Alternatively, an optimized grating structure can also reduce the rainbow effect by suppressing the unwanted diffraction.
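
A single diffraction of ambient light has no cancelling grating vector, so the rainbow can be sketched with the plain first-order grating equation in air, sin(θ_out) = sin(θ_in) − λ/p. The pitch (450 nm), incidence angle (60°) and see-through half-FoV (25°) below are assumed values for illustration only:

```python
import numpy as np

pitch = 450e-9               # assumed out-coupler pitch
theta_in = np.deg2rad(60.0)  # assumed ambient-light incidence angle
half_fov = 25.0              # assumed see-through half-FoV, degrees

rainbow = {}
for wavelength in (450e-9, 532e-9, 635e-9):
    s = np.sin(theta_in) - wavelength / pitch    # first-order grating equation
    theta_out = float(np.rad2deg(np.arcsin(s)))
    rainbow[wavelength] = theta_out
    print(f"{wavelength * 1e9:.0f} nm -> {theta_out:+.1f} deg, "
          f"inside see-through FoV: {abs(theta_out) <= half_fov}")
# Each color exits at a different angle -- the rainbow. A larger grating
# k-vector (higher-index design) pushes these orders out of the FoV.
```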

figure 14

Sketches of formations of a light leakage, b see-through ghost and c rainbow. d Analysis of rainbow formation with k-diagram

Achromatic waveguide

Achromatic waveguide combiners use achromatic elements as couplers, which offers the advantage of realizing full-color images with a single waveguide. A typical achromatic element is a mirror. A waveguide with partial mirrors as the out-coupler is often referred to as a geometric waveguide 6 , 195 , as depicted in Fig. 15a. The in-coupler in this case is usually a prism, to avoid the color dispersion a diffractive element would otherwise introduce. The mirrors couple out TIR light consecutively to produce a large eyebox, just as in a diffractive waveguide. Thanks to the excellent optical properties of mirrors, a geometric waveguide usually exhibits image quality superior to its diffractive counterparts in terms of MTF and color uniformity. Still, the spatially discontinuous arrangement of mirrors results in gaps in the eyebox, which may be alleviated by using a dual-layer structure 196 . Wang et al. designed a geometric waveguide display with five partial mirrors (Fig. 15b). It exhibits a remarkable FoV of 50° by 30° (Fig. 15c) and an exit pupil of 4 mm with a 1D EPE. To achieve 2D EPE, an architecture similar to Fig. 13a can be used by integrating a turning mirror array as the first 1D EPE module 197 . Unfortunately, the k-vector diagrams in Fig. 13b, d cannot be used here because the in-plane k-values are no longer conserved in the in-coupling and out-coupling processes. But some general conclusions remain valid, such as a higher refractive index leading to a larger FoV, and a gradient out-coupling efficiency improving light uniformity.

figure 15

a Schematic of the system configuration. b Geometric waveguide with five partial mirrors. c Image photos demonstrating system FoV. Adapted from b , c ref. 195 with permission from OSA Publishing

The fabrication of a geometric waveguide involves coating mirrors on cut-apart pieces and then reassembling them, which may result in a high cost, especially for the 2D EPE architecture. Another way to implement an achromatic coupler is to use a multiplexed PPHOE 198 , 199 to mimic the behavior of a tilted mirror (Fig. 16a). To understand the working principle, we can use the diagram in Fig. 16b. The law of reflection states that the angle of reflection equals the angle of incidence. Translated into k-vector language, this means a mirror can apply a k-vector of any length along its surface-normal direction, while the k-vector length of the reflected light always equals that of the incident light. This imposes the condition that the k-vector triangle be isosceles; a simple geometric argument shows this is equivalent to the law of reflection. The behavior of a general grating, however, is very different. For simplicity we consider only the main diffraction order. The grating can apply only a k-vector with fixed kx, as dictated by the basic diffraction law. For light at a different incident angle, a different kz would be needed to produce diffracted light whose k-vector length equals that of the incident light. For a grating with a broad angular bandwidth like an SRG, the range of kz is wide, forming a long vertical line in Fig. 16b. For a PPG with a narrow angular bandwidth, the line is short and resembles a dot. If many such dots are distributed along the oblique line corresponding to a mirror, the multiplexed PPGs can imitate the behavior of a tilted mirror. Such a PPHOE is sometimes referred to as a skew mirror 198 . In theory, a larger number of multiplexed PPGs, each with a small index modulation δn, imitates the mirror better, but this makes device fabrication more challenging. Recently, Utsugi et al. demonstrated an impressive skew-mirror waveguide based on 54 multiplexed PPGs (Fig. 16c, d). The display exhibits an effective FoV of 35° by 36°. In the peripheral FoV, some non-uniformity remains (Fig. 16e) due to the out-coupling gap, an inherent feature of flat-type out-couplers.
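
The isosceles condition can be checked with a two-line calculation. In this sketch (angles chosen arbitrarily for illustration), reflection off a tilted mirror is written as adding a vector along the surface normal, −2(k·n̂)n̂, and we verify that |k| is preserved for any incidence:

```python
import numpy as np

def reflect(k_in, normal):
    """Mirror reflection in k-vector form: add -2 (k . n) n along the normal."""
    nhat = np.asarray(normal) / np.linalg.norm(normal)
    return k_in - 2 * np.dot(k_in, nhat) * nhat

k0 = 2 * np.pi / 532e-9
k_in = k0 * np.array([np.sin(np.deg2rad(20)), -np.cos(np.deg2rad(20))])
normal = np.array([np.sin(np.deg2rad(30)), np.cos(np.deg2rad(30))])  # tilted mirror

k_out = reflect(k_in, normal)
print(round(float(np.linalg.norm(k_out) / k0), 9))   # 1.0: |k| preserved at any angle
# A single PPG instead adds one fixed k-vector, so it meets this condition
# only near its design angle -- hence the need to multiplex many PPGs.
```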

figure 16

a System configuration. b Diagram demonstrating how multiplexed PPGs resemble the behavior of a mirror. Photos showing c the system and d image. e Picture demonstrating effective system FoV. Adapted from c – e ref. 199 with permission from ITE

Finally, it is worth mentioning that metasurfaces are also promising candidates for achromatic gratings 200 , 201 as waveguide couplers, owing to their versatile wavefront-shaping capability. The mechanism of achromatic gratings is similar to that of the achromatic lenses discussed previously. However, the development of achromatic metagratings is still in its infancy. Much effort is needed to improve the in-coupling optical efficiency, control the higher diffraction orders to eliminate ghost images, and enable large-size designs for EPE.

Generally, achromatic waveguide combiners exhibit a FoV and eyebox comparable to those of diffractive combiners, but with higher efficiency. A partial-mirror combiner reaches a combiner efficiency of around 650 nit/lm 197 (2D EPE). For a skew-mirror combiner, although the efficiency of the multiplexed PPHOE is relatively low (~1.5%) 199 , the final combiner efficiency of the 1D EPE system is still high (>3000 nit/lm) thanks to the multiple out-couplings.
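
Why multiple out-couplings rescue a low per-interaction efficiency can be seen from a simple geometric-series sketch: if each of N out-coupling events extracts a fraction q of the remaining guided light, the total extracted power is 1 − (1 − q)^N. The 1.5% figure follows the text; the event counts below are illustrative, not values from ref. 199.

```python
def total_extraction(q, n_events):
    """Total fraction extracted after n_events out-couplings, each taking q."""
    return 1 - (1 - q) ** n_events

for n_events in (10, 50, 100):
    print(f"{n_events} events at 1.5%: {total_extraction(0.015, n_events):.1%} extracted")
```

Even a 1.5% single-event efficiency accumulates to the better part of the guided power over many bounces, which is how the 1D EPE system stays bright.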

Table 2 summarizes the performance of different AR combiners. Combining the luminous efficacy in Table 1 with the combiner efficiency in Table 2 gives a comprehensive estimate of the total luminance efficiency (nit/W) for different types of systems. Generally, Maxwellian-type combiners with pupil steering have the highest luminance efficiency when partnered with laser-based light engines like laser-backlit LCoS/DMD or MEMS-LBS. Geometric optical combiners have well-balanced image performance, but further shrinking the system size remains a challenge. Diffractive waveguides have a relatively low combiner efficiency, which can be remedied by an efficient light engine like MEMS-LBS; further development of couplers and EPE schemes would also improve the system efficiency and FoV. Achromatic waveguides have a decent combiner efficiency, and their single-layer design enables a smaller form factor. With advances in the fabrication process, they may become strong contenders to the presently widespread diffractive waveguides.
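
The chained estimate reads as a simple product of unit conversions. A back-of-envelope sketch, with all numbers assumed for illustration rather than taken from Tables 1 and 2: luminance at the eye (nit) ≈ electrical power (W) × light-engine luminous efficacy (lm/W) × combiner efficiency (nit/lm).

```python
power_W = 0.2                 # assumed light-engine electrical power
efficacy_lm_per_W = 5.0       # assumed luminous efficacy of the light engine
combiner_nit_per_lm = 150.0   # within the 50-200 nit/lm diffractive-waveguide range

luminance_nit = power_W * efficacy_lm_per_W * combiner_nit_per_lm
print(f"{luminance_nit:.0f} nit")   # 150 nit under these assumptions
```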

Conclusions and perspectives

VR and AR carry high expectations to revolutionize the way we interact with the digital world. Accompanying these expectations are the engineering challenges of squeezing a high-performance display system into a tightly packed module suitable for daily wear. Although étendue conservation constitutes a major obstacle, remarkable progress with innovative optics and photonics continues to take place. Ultra-thin optical elements like PPHOEs and LCHOEs provide alternatives to traditional optics, and their unique features of multiplexing capability and polarization dependency further expand the possibilities for novel wavefront modulation. At the same time, nanoscale-engineered metasurfaces/SRGs provide large design freedom to achieve functions beyond conventional geometric optical devices. Newly emerging micro-LEDs open an opportunity for compact microdisplays with high peak brightness and good stability. Further advances in device engineering and manufacturing processes are expected to boost the performance of metasurfaces/SRGs and micro-LEDs for AR and VR applications.

Data availability

All data needed to evaluate the conclusions in the paper are present in the paper. Additional data related to this paper may be requested from the authors.

Cakmakci, O. & Rolland, J. Head-worn displays: a review. J. Disp. Technol. 2 , 199–216 (2006).


Zhan, T. et al. Augmented reality and virtual reality displays: perspectives and challenges. iScience 23 , 101397 (2020).

Rendon, A. A. et al. The effect of virtual reality gaming on dynamic balance in older adults. Age Ageing 41 , 549–552 (2012).


Choi, S., Jung, K. & Noh, S. D. Virtual reality applications in manufacturing industries: past research, present findings, and future directions. Concurrent Eng. 23 , 40–63 (2015).

Li, X. et al. A critical review of virtual and augmented reality (VR/AR) applications in construction safety. Autom. Constr. 86 , 150–162 (2018).

Kress, B. C. Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets (Bellingham: SPIE Press, 2020).

Cholewiak, S. A. et al. A perceptual eyebox for near-eye displays. Opt. Express 28 , 38008–38028 (2020).

Lee, Y. H., Zhan, T. & Wu, S. T. Prospects and challenges in augmented reality displays. Virtual Real. Intell. Hardw. 1 , 10–20 (2019).

Kim, J. et al. Foveated AR: dynamically-foveated augmented reality display. ACM Trans. Graph. 38 , 99 (2019).

Tan, G. J. et al. Foveated imaging for near-eye displays. Opt. Express 26 , 25076–25085 (2018).

Lee, S. et al. Foveated near-eye display for mixed reality using liquid crystal photonics. Sci. Rep. 10 , 16127 (2020).

Yoo, C. et al. Foveated display system based on a doublet geometric phase lens. Opt. Express 28 , 23690–23702 (2020).

Akşit, K. et al. Manufacturing application-driven foveated near-eye displays. IEEE Trans. Vis. Computer Graph. 25 , 1928–1939 (2019).

Zhu, R. D. et al. High-ambient-contrast augmented reality with a tunable transmittance liquid crystal film and a functional reflective polarizer. J. Soc. Inf. Disp. 24 , 229–233 (2016).

Lincoln, P. et al. Scene-adaptive high dynamic range display for low latency augmented reality. In Proc. 21st ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games . (ACM, San Francisco, CA, 2017).

Duerr, F. & Thienpont, H. Freeform imaging systems: fermat’s principle unlocks “first time right” design. Light.: Sci. Appl. 10 , 95 (2021).

Bauer, A., Schiesser, E. M. & Rolland, J. P. Starting geometry creation and design method for freeform optics. Nat. Commun. 9 , 1756 (2018).

Rolland, J. P. et al. Freeform optics for imaging. Optica 8 , 161–176 (2021).

Jang, C. et al. Design and fabrication of freeform holographic optical elements. ACM Trans. Graph. 39 , 184 (2020).

Gabor, D. A new microscopic principle. Nature 161 , 777–778 (1948).

Kostuk, R. K. Holography: Principles and Applications (Boca Raton: CRC Press, 2019).

Lawrence, J. R., O'Neill, F. T. & Sheridan, J. T. Photopolymer holographic recording material. Optik 112 , 449–463 (2001).

Guo, J. X., Gleeson, M. R. & Sheridan, J. T. A review of the optimisation of photopolymer materials for holographic data storage. Phys. Res. Int. 2012 , 803439 (2012).

Jang, C. et al. Recent progress in see-through three-dimensional displays using holographic optical elements [Invited]. Appl. Opt. 55 , A71–A85 (2016).

Xiong, J. H. et al. Holographic optical elements for augmented reality: principles, present status, and future perspectives. Adv. Photonics Res. 2 , 2000049 (2021).

Tabiryan, N. V. et al. Advances in transparent planar optics: enabling large aperture, ultrathin lenses. Adv. Optical Mater. 9 , 2001692 (2021).

Zanutta, A. et al. Photopolymeric films with highly tunable refractive index modulation for high precision diffractive optics. Optical Mater. Express 6 , 252–263 (2016).

Moharam, M. G. & Gaylord, T. K. Rigorous coupled-wave analysis of planar-grating diffraction. J. Optical Soc. Am. 71 , 811–818 (1981).

Xiong, J. H. & Wu, S. T. Rigorous coupled-wave analysis of liquid crystal polarization gratings. Opt. Express 28 , 35960–35971 (2020).

Xie, S., Natansohn, A. & Rochon, P. Recent developments in aromatic azo polymers research. Chem. Mater. 5 , 403–411 (1993).

Shishido, A. Rewritable holograms based on azobenzene-containing liquid-crystalline polymers. Polym. J. 42 , 525–533 (2010).

Bunning, T. J. et al. Holographic polymer-dispersed liquid crystals (H-PDLCs). Annu. Rev. Mater. Sci. 30 , 83–115 (2000).

Liu, Y. J. & Sun, X. W. Holographic polymer-dispersed liquid crystals: materials, formation, and applications. Adv. Optoelectron. 2008 , 684349 (2008).

Xiong, J. H. & Wu, S. T. Planar liquid crystal polarization optics for augmented reality and virtual reality: from fundamentals to applications. eLight 1 , 3 (2021).

Yaroshchuk, O. & Reznikov, Y. Photoalignment of liquid crystals: basics and current trends. J. Mater. Chem. 22 , 286–300 (2012).

Sarkissian, H. et al. Periodically aligned liquid crystal: potential application for projection displays. Mol. Cryst. Liq. Cryst. 451 , 1–19 (2006).

Komanduri, R. K. & Escuti, M. J. Elastic continuum analysis of the liquid crystal polarization grating. Phys. Rev. E 76 , 021701 (2007).

Kobashi, J., Yoshida, H. & Ozaki, M. Planar optics with patterned chiral liquid crystals. Nat. Photonics 10 , 389–392 (2016).

Lee, Y. H., Yin, K. & Wu, S. T. Reflective polarization volume gratings for high efficiency waveguide-coupling augmented reality displays. Opt. Express 25 , 27008–27014 (2017).

Lee, Y. H., He, Z. Q. & Wu, S. T. Optical properties of reflective liquid crystal polarization volume gratings. J. Optical Soc. Am. B 36 , D9–D12 (2019).

Xiong, J. H., Chen, R. & Wu, S. T. Device simulation of liquid crystal polarization gratings. Opt. Express 27 , 18102–18112 (2019).

Czapla, A. et al. Long-period fiber gratings with low-birefringence liquid crystal. Mol. Cryst. Liq. Cryst. 502 , 65–76 (2009).

Dąbrowski, R., Kula, P. & Herman, J. High birefringence liquid crystals. Crystals 3 , 443–482 (2013).

Mack, C. Fundamental Principles of Optical Lithography: The Science of Microfabrication (Chichester: John Wiley & Sons, 2007).

Genevet, P. et al. Recent advances in planar optics: from plasmonic to dielectric metasurfaces. Optica 4 , 139–152 (2017).

Guo, L. J. Nanoimprint lithography: methods and material requirements. Adv. Mater. 19 , 495–513 (2007).

Park, J. et al. Electrically driven mid-submicrometre pixelation of InGaN micro-light-emitting diode displays for augmented-reality glasses. Nat. Photonics 15 , 449–455 (2021).

Khorasaninejad, M. et al. Metalenses at visible wavelengths: diffraction-limited focusing and subwavelength resolution imaging. Science 352 , 1190–1194 (2016).

Li, S. Q. et al. Phase-only transmissive spatial light modulator based on tunable dielectric metasurface. Science 364 , 1087–1090 (2019).

Liang, K. L. et al. Advances in color-converted micro-LED arrays. Jpn. J. Appl. Phys. 60 , SA0802 (2020).

Jin, S. X. et al. GaN microdisk light emitting diodes. Appl. Phys. Lett. 76 , 631–633 (2000).

Day, J. et al. Full-scale self-emissive blue and green microdisplays based on GaN micro-LED arrays. In Proc. SPIE 8268, Quantum Sensing and Nanophotonic Devices IX (SPIE, San Francisco, California, United States, 2012).

Huang, Y. G. et al. Mini-LED, micro-LED and OLED displays: present status and future perspectives. Light.: Sci. Appl. 9 , 105 (2020).

Parbrook, P. J. et al. Micro-light emitting diode: from chips to applications. Laser Photonics Rev. 15 , 2000133 (2021).

Day, J. et al. III-Nitride full-scale high-resolution microdisplays. Appl. Phys. Lett. 99 , 031116 (2011).

Liu, Z. J. et al. 360 PPI flip-chip mounted active matrix addressable light emitting diode on silicon (LEDoS) micro-displays. J. Disp. Technol. 9 , 678–682 (2013).

Zhang, L. et al. Wafer-scale monolithic hybrid integration of Si-based IC and III–V epi-layers—A mass manufacturable approach for active matrix micro-LED micro-displays. J. Soc. Inf. Disp. 26 , 137–145 (2018).

Tian, P. F. et al. Size-dependent efficiency and efficiency droop of blue InGaN micro-light emitting diodes. Appl. Phys. Lett. 101 , 231110 (2012).

Olivier, F. et al. Shockley-Read-Hall and Auger non-radiative recombination in GaN based LEDs: a size effect study. Appl. Phys. Lett. 111 , 022104 (2017).

Konoplev, S. S., Bulashevich, K. A. & Karpov, S. Y. From large-size to micro-LEDs: scaling trends revealed by modeling. Phys. Status Solidi (A) 215 , 1700508 (2018).

Li, L. Z. et al. Transfer-printed, tandem microscale light-emitting diodes for full-color displays. Proc. Natl Acad. Sci. USA 118 , e2023436118 (2021).

Oh, J. T. et al. Light output performance of red AlGaInP-based light emitting diodes with different chip geometries and structures. Opt. Express 26 , 11194–11200 (2018).

Shen, Y. C. et al. Auger recombination in InGaN measured by photoluminescence. Appl. Phys. Lett. 91 , 141101 (2007).

Wong, M. S. et al. High efficiency of III-nitride micro-light-emitting diodes by sidewall passivation using atomic layer deposition. Opt. Express 26 , 21324–21331 (2018).

Han, S. C. et al. AlGaInP-based Micro-LED array with enhanced optoelectrical properties. Optical Mater. 114 , 110860 (2021).

Wong, M. S. et al. Size-independent peak efficiency of III-nitride micro-light-emitting-diodes using chemical treatment and sidewall passivation. Appl. Phys. Express 12 , 097004 (2019).

Ley, R. T. et al. Revealing the importance of light extraction efficiency in InGaN/GaN microLEDs via chemical treatment and dielectric passivation. Appl. Phys. Lett. 116 , 251104 (2020).

Moon, S. W. et al. Recent progress on ultrathin metalenses for flat optics. iScience 23 , 101877 (2020).

Arbabi, A. et al. Efficient dielectric metasurface collimating lenses for mid-infrared quantum cascade lasers. Opt. Express 23 , 33310–33317 (2015).

Yu, N. F. et al. Light propagation with phase discontinuities: generalized laws of reflection and refraction. Science 334 , 333–337 (2011).

Liang, H. W. et al. High performance metalenses: numerical aperture, aberrations, chromaticity, and trade-offs. Optica 6 , 1461–1470 (2019).

Park, J. S. et al. All-glass, large metalens at visible wavelength using deep-ultraviolet projection lithography. Nano Lett. 19 , 8673–8682 (2019).

Yoon, G. et al. Single-step manufacturing of hierarchical dielectric metalens in the visible. Nat. Commun. 11 , 2268 (2020).

Lee, G. Y. et al. Metasurface eyepiece for augmented reality. Nat. Commun. 9 , 4562 (2018).

Chen, W. T. et al. A broadband achromatic metalens for focusing and imaging in the visible. Nat. Nanotechnol. 13 , 220–226 (2018).

Wang, S. M. et al. A broadband achromatic metalens in the visible. Nat. Nanotechnol. 13 , 227–232 (2018).

Lan, S. F. et al. Metasurfaces for near-eye augmented reality. ACS Photonics 6 , 864–870 (2019).

Fan, Z. B. et al. A broadband achromatic metalens array for integral imaging in the visible. Light.: Sci. Appl. 8 , 67 (2019).

Shi, Z. J., Chen, W. T. & Capasso, F. Wide field-of-view waveguide displays enabled by polarization-dependent metagratings. In Proc. SPIE 10676, Digital Optics for Immersive Displays (SPIE, Strasbourg, France, 2018).

Hong, C. C., Colburn, S. & Majumdar, A. Flat metaform near-eye visor. Appl. Opt. 56 , 8822–8827 (2017).

Bayati, E. et al. Design of achromatic augmented reality visors based on composite metasurfaces. Appl. Opt. 60 , 844–850 (2021).

Nikolov, D. K. et al. Metaform optics: bridging nanophotonics and freeform optics. Sci. Adv. 7 , eabe5112 (2021).

Tamir, T. & Peng, S. T. Analysis and design of grating couplers. Appl. Phys. 14 , 235–254 (1977).

Miller, J. M. et al. Design and fabrication of binary slanted surface-relief gratings for a planar optical interconnection. Appl. Opt. 36 , 5717–5727 (1997).

Levola, T. & Laakkonen, P. Replicated slanted gratings with a high refractive index material for in and outcoupling of light. Opt. Express 15 , 2067–2074 (2007).

Shrestha, S. et al. Broadband achromatic dielectric metalenses. Light.: Sci. Appl. 7 , 85 (2018).

Li, Z. Y. et al. Meta-optics achieves RGB-achromatic focusing for virtual reality. Sci. Adv. 7 , eabe4458 (2021).

Ratcliff, J. et al. ThinVR: heterogeneous microlens arrays for compact, 180 degree FOV VR near-eye displays. IEEE Trans. Vis. Computer Graph. 26 , 1981–1990 (2020).

Wong, T. L. et al. Folded optics with birefringent reflective polarizers. In Proc. SPIE 10335, Digital Optical Technologies 2017 (SPIE, Munich, Germany, 2017).

Li, Y. N. Q. et al. Broadband cholesteric liquid crystal lens for chromatic aberration correction in catadioptric virtual reality optics. Opt. Express 29 , 6011–6020 (2021).

Bang, K. et al. Lenslet VR: thin, flat and wide-FOV virtual reality display using fresnel lens and lenslet array. IEEE Trans. Vis. Computer Graph. 27 , 2545–2554 (2021).

Maimone, A. & Wang, J. R. Holographic optics for thin and lightweight virtual reality. ACM Trans. Graph. 39 , 67 (2020).

Kramida, G. Resolving the vergence-accommodation conflict in head-mounted displays. IEEE Trans. Vis. Computer Graph. 22 , 1912–1931 (2016).

Zhan, T. et al. Multifocal displays: review and prospect. PhotoniX 1 , 10 (2020).

Shimobaba, T., Kakue, T. & Ito, T. Review of fast algorithms and hardware implementations on computer holography. IEEE Trans. Ind. Inform. 12 , 1611–1622 (2016).

Xiao, X. et al. Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]. Appl. Opt. 52 , 546–560 (2013).

Kuiper, S. & Hendriks, B. H. W. Variable-focus liquid lens for miniature cameras. Appl. Phys. Lett. 85 , 1128–1130 (2004).

Liu, S. & Hua, H. Time-multiplexed dual-focal plane head-mounted display with a liquid lens. Opt. Lett. 34 , 1642–1644 (2009).

Wilson, A. & Hua, H. Design and demonstration of a vari-focal optical see-through head-mounted display using freeform Alvarez lenses. Opt. Express 27 , 15627–15637 (2019).

Zhan, T. et al. Pancharatnam-Berry optical elements for head-up and near-eye displays [Invited]. J. Optical Soc. Am. B 36 , D52–D65 (2019).

Oh, C. & Escuti, M. J. Achromatic diffraction from polarization gratings with high efficiency. Opt. Lett. 33 , 2287–2289 (2008).

Zou, J. Y. et al. Broadband wide-view Pancharatnam-Berry phase deflector. Opt. Express 28 , 4921–4927 (2020).

Zhan, T., Lee, Y. H. & Wu, S. T. High-resolution additive light field near-eye display by switchable Pancharatnam–Berry phase lenses. Opt. Express 26 , 4863–4872 (2018).

Tan, G. J. et al. Polarization-multiplexed multiplane display. Opt. Lett. 43 , 5651–5654 (2018).

Lanman, D. R. Display systems research at facebook reality labs (conference presentation). In Proc. SPIE 11310, Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) (SPIE, San Francisco, California, United States, 2020).

Liu, Z. J. et al. A novel BLU-free full-color LED projector using LED on silicon micro-displays. IEEE Photonics Technol. Lett. 25 , 2267–2270 (2013).

Han, H. V. et al. Resonant-enhanced full-color emission of quantum-dot-based micro LED display technology. Opt. Express 23 , 32504–32515 (2015).

Lin, H. Y. et al. Optical cross-talk reduction in a quantum-dot-based full-color micro-light-emitting-diode display by a lithographic-fabricated photoresist mold. Photonics Res. 5 , 411–416 (2017).

Liu, Z. J. et al. Micro-light-emitting diodes with quantum dots in display technology. Light.: Sci. Appl. 9 , 83 (2020).

Kim, H. M. et al. Ten micrometer pixel, quantum dots color conversion layer for high resolution and full color active matrix micro-LED display. J. Soc. Inf. Disp. 27 , 347–353 (2019).

Xuan, T. T. et al. Inkjet-printed quantum dot color conversion films for high-resolution and full-color micro light-emitting diode displays. J. Phys. Chem. Lett. 11 , 5184–5191 (2020).

Chen, S. W. H. et al. Full-color monolithic hybrid quantum dot nanoring micro light-emitting diodes with improved efficiency using atomic layer deposition and nonradiative resonant energy transfer. Photonics Res. 7 , 416–422 (2019).

Krishnan, C. et al. Hybrid photonic crystal light-emitting diode renders 123% color conversion effective quantum yield. Optica 3 , 503–509 (2016).

Kang, J. H. et al. RGB arrays for micro-light-emitting diode applications using nanoporous GaN embedded with quantum dots. ACS Applied Mater. Interfaces 12 , 30890–30895 (2020).


Acknowledgements

The authors are indebted to Goertek Electronics for the financial support and Guanjun Tan for helpful discussions.

Author information

Authors and Affiliations

College of Optics and Photonics, University of Central Florida, Orlando, FL, 32816, USA

Jianghao Xiong, En-Lin Hsiang, Ziqian He, Tao Zhan & Shin-Tson Wu


Contributions

J.X. conceived the idea and initiated the project. J.X. mainly wrote the manuscript and produced the figures. E.-L.H., Z.H., and T.Z. contributed to parts of the manuscript. S.W. supervised the project and edited the manuscript.

Corresponding author

Correspondence to Shin-Tson Wu.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Xiong, J., Hsiang, EL., He, Z. et al. Augmented reality and virtual reality displays: emerging technologies and future perspectives. Light Sci Appl 10 , 216 (2021). https://doi.org/10.1038/s41377-021-00658-8


Received : 06 June 2021

Revised : 26 September 2021

Accepted : 04 October 2021

Published : 25 October 2021

DOI : https://doi.org/10.1038/s41377-021-00658-8




Virtual Reality - Essay Samples And Topic Ideas For Free

Virtual Reality (VR), a simulated experience that can resemble or be entirely different from the real world, has made significant strides, with applications in gaming, education, healthcare, and more. Essays on VR might delve into its technological advancements, its applications, and the societal, ethical, and psychological implications of immersive digital environments. The discussion could also extend to the comparison between VR and augmented reality (AR), exploring how these technologies are reshaping entertainment, communication, and learning experiences. A wide selection of free essay samples on Virtual Reality is available on the PapersOwl website. You can use our samples as inspiration for your own essay or research paper, or simply to explore a new topic.

Virtual Reality in the Medical Field

Before I began researching virtual reality (VR), augmented reality (AR), and mixed reality (MR), I knew very little about each subject and even had to look up the definitions. I was aware that Google was working to produce a viewer called Google Cardboard, and that Sony was working on its own version, PlayStation VR. I looked at multiple definitions of each of the three realities (virtual, augmented, and mixed) and have compiled all the information into an easily […]

Virtual Reality (VR) is not a New Technology

Virtual reality can be described as an immersive multimedia technology (Krau, 2016). Today, virtual reality (VR) is not a new technology (Barnes, 2016); the first computerized VR appeared in the late 1960s (VRS, 2016). According to the Oxford English Dictionary, virtual reality refers to "the computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with an internal screen […]

Future of Video Games

For many centuries, technology has been a big contributor to human history. It has helped humans advance in many different areas of life, giving us the means to advance the human race and gain more knowledge than our ancestors had. Technology has advanced rapidly over the years. Not that long ago, the very first cell phone was an extraordinary invention that took the world by storm. It made talking with people over long distances seem like […]


Virtual Reality (VR)

Virtual reality has enhanced life in all aspects by allowing your senses to feel what your body cannot experience; it allows you to travel and learn, and it has a bright future ahead of it. Even though it has faced obstacles, it is an emerging technology at its best. So what is virtual reality? "Virtual reality is the term used to describe a three-dimensional, computer generated environment which can be explored and interacted with by a person. That person becomes part of this […]

Virtual Reality (VR) Today

Virtual reality (VR) and, to some extent, augmented reality (AR) have been a science-fiction dream for many years, possibly going back as far as the 1950s; however, over the past ten to twenty years, these conceptual ideas have made their way into reality and are slowly starting to integrate into society and daily life as "emerging technologies". According to Reede and Bailiff (2016), VR startups have raised more than $1.46 billion in venture capital since 2012, with […]

Specific Fictional Model for Virtual Reality

On November 12, 2018, the Oculus Blog posted, "You haven't seen it until you see it with VR." Even though the public became aware of virtual reality only recently, the concept has been around for decades. It took many years and many attempts to reach the polish of today's Oculus headsets. Technology has evolved, and many inventors have tried to create something that helps viewers feel present at some event or scene. Virtual Reality is a computer invention that tries […]

History of Virtual Reality

Historically, the concept of virtual reality preceded the technology that would eventually develop and formalize it. Every development in VR has contributed to the creation of illusion. Dating back to the nineteenth century, virtual reality appears in 360-degree murals intended to fill a person's entire field of view. In top galleries, this form of art has occupied much of the exhibit space and continues to expand. Virtual reality has branched from pen to paper and paintbrush to canvas […]

A Computer-Based Technology: Virtual Reality

Since humans walked into the Information Age, we have seen masses of productive results brought by the Internet and the computer, like multimedia and cyberspace, both of which are essential parts of ordinary people's lives. Now, in the 18th year of the 21st century, with the popularization of smartphones and personal computers, the content presented on gleaming screens is gradually losing the attraction it once had, for at a time where funky things and eyeball-catching […]

The Computer-Generated Simulation Image or Environment – VR

Virtual reality is a computer-generated simulation of an image or environment that a person can interact with in a seemingly real or physical way. It is used for entertainment, like video games and simulations, or to see something new. Many companies, like Sony and Microsoft, use virtual reality to sell products. You can use it to train for a career. It can also be used for design; for example, engineers can use it to design a building or a fair ride. It can be used by a […]

Virtual Reality and Identity

Virtual reality, as a simulation of a real or imaginary phenomenon, allows freedom for the individuals within the environment. Virtual reality has no defined gender roles and defies society's definitions of gender and boundaries. This is illustrated in the films The Matrix and Her, in which the characters exhibit a form of freedom and no clearly defined boundaries. Virtual reality allows the change of identity and total control over the identity of a character. This is displayed by Trinity in the […]

Are Virtual Reality Becoming more a Part of our Reality than Before?

Video games have been a part of the world’s culture for the past five or so decades and have affected many people’s lives. Since video games were first released commercially, we have seen the rise of many iconic characters from these games like Mario and Sonic. Although video games seem to be something to play for fun, they are being used today for more than their original intent. Thanks to the gaming community, new technologies like Virtual Reality (VR) have […]

Vr’s Impact to Modern World

About 75 percent of the Forbes World's Most Valuable Brands have created some form of virtual reality or augmented reality experience for customers or employees. This must say something when the list includes companies such as Sony, Facebook (Oculus), and HTC. There's obviously some potential in virtual reality if companies are dedicating parts of their businesses to this material. The innovation of this technology is certainly amazing, but what impact will it have on the future of technology or even businesses, […]

Smart Medicine and Virtual Reality – Use Cases

Virtual reality (VR), the creation of immersive, computer-generated environments so convincing that they feel like the real thing, isn't just for video games and escapism. It is also changing the way that doctors work and greatly improving patients' lives. Here are five examples of how VR is making medicine smarter. • Curing phobias and PTSD: Facing your fears is the best way to overcome a phobia. But for people who are deathly afraid of spiders, needles, or flying […]

Virtual Reality: Game Transfer Phenomena

Imagine if you were floating through space, watching a horror film, or perhaps playing a video game, and it seemed like you were actually there. With the invention of virtual reality (VR), people are able to explore the illusion of this reality. Virtual reality is computer-generated technology used to create a manufactured environment. There is a range of systems used for this purpose, such as special headsets and fiber-optic gloves. The term virtual reality means […]

What is Virtual Reality? VR Definition and Examples

Virtual Reality (VR) is a powerful technology that has the potential to cause a multitude of social and psychological problems. VR is defined as a "computer-generated display that allows or compels the user to have a feeling of being present in an environment other than the one they are actually in and to interact with that environment" (Schroeder, 2). VR creates a three-dimensional situation in which the user is able to fully immerse themselves and interact with the environment. Through […]

Virtual Reality in Regards to Health and how it Can be Life-Changing

Exploring Virtual Reality in Health. Diego Leon. Professor Ron Frazier. October 29, 2018. Introduction: When most individuals think of technology involving computers, they think it can only involve two of our five senses: vision (sight) and hearing (audition). But what if we could interact through more than two sensory channels? Virtual reality deals with just that. Virtual reality is defined as a "high-end user interface that involves real-time simulation and interaction through […]

Potential Impacts of VR

Introduction Commonly abbreviated as VR, Virtual Reality is an interactive computer-generated experience that takes place within a simulated environment or three-dimensional image (Burdea & Coiffet, 2003). The experience is generated by a blend of interactive software and hardware, and is then presented in a realistic fashion such that the user interacts with and accepts the simulated environment as if it were real. The immersive environment can either be real or artificial, and is typically produced in 3D modeling software before […]

Utilization of PC Innovation: Virtual Reality

Virtual Reality (VR) is the use of computer technology to create a simulated environment. Unlike traditional user interfaces, VR places the user inside an experience. Instead of viewing a screen in front of them, users are immersed and able to interact with 3D worlds. By simulating as many senses as possible, such as vision, hearing, touch, and even smell, the computer is transformed into a gatekeeper to this artificial world. The only limits to near-real VR experiences […]

Mobile Technology: Virtual Reality

Virtual reality: Computer-generated reality, or VR, is the latest kind of user interface. In contrast to traditional interfaces, it plunges the person into a 3D environment instead of showing it on a screen, making individuals feel as if they are physically in that environment, where they can touch, see, and hear the scene. It works by tricking the human mind into accepting what it perceives as real. Virtual reality can be viewed as a very vivid encounter […]

Virtual Reality and Multiple Sclerosis Experiment

    Multiple Sclerosis (MS) is a progressive disease of the central nervous system. According to the National Multiple Sclerosis Society, it is estimated that MS affects more than 2.3 million people worldwide.1 At this time the direct cause of MS is still unknown. However, the immune system attacks and damages the myelin sheath of nerve fibers, a fatty covering that surrounds and protects the nerve fibers. The immune system also attacks oligodendrocytes, which are the myelin-producing cells, as well […]

Virtual Reality Clan Generators: Building Digital Empires in a Virtual World

Ever fancied being the chief of your own virtual clan? Welcome to the world of clan generator games, where you're not just playing a game; you're building an empire, one decision at a time. These aren't your run-of-the-mill video games. They're a blend of strategy, storytelling, and, let's be honest, a bit of power tripping. Whether you're managing resources, diplomatically dealing with neighbors, or leading your digital tribe into battle, these games offer a slice of escapism with a side […]

Development of Virtual and Augmented Reality

Abstract This research paper is about virtual and augmented reality; it goes into detail about the history, the difference between the two, and how they're used in life today. Virtual reality was first experimented with in the 1950s, but Ivan Sutherland is credited with creating the first device dealing with both augmented and virtual reality in 1968. Virtual and augmented reality seem similar, but the difference is that augmented reality is a bridge between the real world and […]

Augmented Reality Virtual Reality and the Music Industry

Although AR/VR technology is still in its infancy, it has already made quite an impact on most (if not all) industries, including healthcare, retail, military/defense, journalism/media, and architecture. One that especially stands out to me is the effect of AR/VR on the entertainment business, specifically the music industry. Each year hardware developers move us one step closer to a future where AR/VR is used as a common household item. Advances perhaps viewed as minuscule by the general public (i.e. simple […]

Augmented and Virtual Reality in a Business

Since the 1980s, the technology to remove oneself from this reality and enter another, simulated reality has been possible. Augmented and virtual reality have been steadily gaining popularity for the past 40 years, and looking back at where the technology was compared to where it is today is amazing. According to Ryan Kaiser from Deloitte Consulting, augmented reality is a computer-generated image placed in the same field of view as the real world. While Virtual […]

Developing and Testing Photorealistic Avatar with Body Motions and Facial Expressions for Communication in Social Virtual Reality Applications

Abstract: Providing effective communication in social virtual reality (VR) applications requires a high level of avatar realism and body movement to convey users' thoughts and behaviours. In this research, we investigate the influence of avatar representation and behaviour on communication in an immersive virtual environment (IVE) by comparing video-based versus model-based avatar representations. Additionally, we introduce a novel VR communication system […]

Subway Surfers: Unraveling the Ultimate Endless Virtual Reality Adventure

Subway Surfers, a mobile gaming phenomenon meticulously crafted by Kiloo and SYBO Games, effortlessly stands out in the realm of endless runner games. This captivating and adrenaline-pumping game has carved an indelible niche for itself, firmly establishing its supremacy in the world of endless runners. In this essay, we will embark on a comprehensive exploration of Subway Surfers, delving into its gameplay dynamics, visual aesthetics, and the compelling reasons why it has become an essential choice for gamers seeking an […]

Technology in Modern Basketball

With basketball getting more and more popular, more people regard basketball as their favorite sport. But basketball has come through a long period of change: the style of play and the level of coaching have changed a lot, making the game very different from what it once was. Today I want to introduce some obvious differences between modern and traditional basketball. Firstly, the game style has changed a lot; in the past, players were expected to shoot more mid-range shots. And different kinds of mid-range […]

BIM-VR Synchronization: Challenges and Solutions

There has been a steady increase in the adoption of BIM in the construction and engineering industry, and also in facility management in the past few decades. The next step is to create a framework that will allow BIM models to be translated into virtual reality models in real time. The current issues in developing virtual reality models are many, and need to be addressed. Some of the issues are that the process takes up a lot of time, and […]


Essay about Virtual Reality (VR)

Virtual reality is a three-dimensional computer environment that interacts with a person: the person is immersed in this environment using various devices (helmets, glasses, etc.), becomes part of the virtual world, and controls virtual objects.

The idea of immersing a person in a surrounding non-physical environment arose in the Middle Ages in the field of art, when concave frescoes were created to involve the viewer in what is happening in the image. In the 1830s, the first stereoscopes were created; their principle was to place two pictures, depicting the same scene from slightly different positions in space, in separate eyepieces. Thus, one eye saw one picture, the other saw the other, and the brain combined them into a single three-dimensional image. The same principle of producing a three-dimensional image is often used today, with smartphones and LCD displays taking the place of printed pictures.

After stereoscopes, the first flight simulators were invented in the 1920s: special devices that let pilots practice all the actions involved in controlling an aircraft. Such simulators were mainly used by the military to train and improve the skills of personnel. In 1982, the world's first laboratory dedicated to the research and development of virtual reality devices was established in the United States.

During the first decade of the 21st century virtual reality did not gain much popularity, but since 2012 VR devices have been actively gaining ground in the entertainment industry. In 2012, the virtual reality headset startup Oculus VR was introduced on Kickstarter and was later bought by Facebook. After the emerging demand for headsets, many IT companies, including Google, Apple, Amazon, Microsoft, Samsung, HTC, Sony, and others, began to develop their own gadgets.
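The stereoscope principle described above, two slightly offset views fused by the brain, is the same geometry modern VR headsets rely on. As a rough illustrative sketch (not taken from the essay; the simple pinhole model, the function name, and the default numbers are my own assumptions), the horizontal screen position of a point for each eye can be computed from the interpupillary distance (IPD) and the point's virtual depth:

```python
def eye_projections(point_depth_m, ipd_m=0.063, screen_dist_m=0.5):
    """Project a point on the viewer's central axis onto a screen plane, per eye.

    Toy pinhole model: eyes sit at x = -ipd/2 and x = +ipd/2, the point sits
    straight ahead at the given depth, and the screen plane is at screen_dist.
    By similar triangles, each eye sees the point at x_eye * (1 - d/z).
    Returns (left_x, right_x, disparity) in meters.
    """
    half = ipd_m / 2.0
    scale = 1.0 - screen_dist_m / point_depth_m
    left_x = -half * scale
    right_x = half * scale
    return left_x, right_x, right_x - left_x

# A point exactly on the screen plane has zero disparity (both eyes see it
# at the same spot); deeper points need a larger left/right image offset.
on_screen = eye_projections(0.5)    # disparity 0.0
far_away = eye_projections(100.0)   # disparity approaches the full IPD
```

This is why a stereoscope (or a phone in a VR viewer) shifts the two pictures apart for distant objects: the offset, not the pictures themselves, is what the brain reads as depth.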

Study and Analysis of Virtual Reality and its Impact on the Current Era


Virtual Reality - The Technology of The Future

Published: Nov 7, 2018 | Words: 2031 | Pages: 4

Table of contents

  • Four key elements of VR experience
  • Other key concepts
  • Types of problems that can benefit from VR
  • Virtual world
  • Sensory feedback
  • Interactivity
  • Telepresence
  • Collaborative environment

  • Immersive 3D presentation. VR is suitable in scenarios where an immersive 3D presentation or a 3D visualization of an object is more persuasive than a one- or two-dimensional format, such as architectural walkthroughs, design spaces, virtual prototyping, scientific visualization, and teaching and learning a subject in 3D. Insite VR allows architects to transform designs from major modeling software into three-dimensional VR environments, which they can then view as life-like 3D scenes using certain VR headsets. This gives architects a chance to "walk through" a design, as it were, and see how it would look when completed, so they can make changes. Insite VR also allows multiple VR users in remote locations to explore content together and collaborate virtually.
  • Exploration. VR is a suitable delivery mechanism if the goal is to explore or become familiar with a specific environment (real or fictitious). Imagine that any art student in Vietnam (my home country), or anywhere in the world, could visit the Metropolitan Museum of Art in New York or the Louvre in Paris just by putting a VR headset on. How fascinating is that! On the commercial side, VR can be an effective marketing and sales tool for the hospitality, tourism, and real estate industries. A VR presentation or experience can give customers a personalized and detailed tour of a resort, hotel, or individual suite, which adds to the sense of being there and can improve sales conversion. Through a collaboration with VR firm Matterport, the New York Times now offers virtual reality tours of some of its luxury real estate listings.
  • Simulation. Types of problems that can benefit from simulations in VR: problems that cannot be tackled in the physical world (e.g., witnessing the formation of the Earth); problems that cannot be studied safely (e.g., witnessing an earthquake); problems that require extensive practice to avoid costly mistakes in real life (e.g., football training, surgical practice); problems that cannot be deployed due to cost constraints (e.g., a car dealership showroom); and "What if?" studies, where virtual exploration could lead to a better understanding.
  • Live and real-life events
  • Social platforms and virtual collaboration
  • Empathy. VR affects people on an emotional level much more than any other medium. Because of its immersive properties, VR can give people not just a better sense of places but also more empathy and a deeper emotional connection to the people who were actually there. It is a powerful tool for visual storytelling and simulation experiences that connect human beings to one another, spread awareness, and inspire action on pressing social issues, as in the journalism, nonprofit, and environmental sectors.



IEEE SSIT Technology and Society Magazine

Virtual Reality: Ethical Challenges and Dangers

By Ben Kenwright on January 14th, 2019, in Editorial & Opinion, Ethics, Magazine Articles, Social Implications of Technology, Societal Impact.


Physiological and Social Impacts

According to Moore’s Law, there is a correlation between technological advancement and social and ethical impacts  [13]. Many advances, such as quantum computing  [22], 3D-printing  [11], flexible transparent screens  [1], and breakthroughs in machine learning and artificial intelligence  [17] have social impacts. One area that introduces a new dimension of ethical concerns is virtual reality (VR). VR continues to develop novel applications beyond simple entertainment, due to the increasing availability of VR technologies and the intense immersive experience. While the potential advantages of virtual reality are limitless, there has been much debate about the ethical complexities that this new technology presents  [9],  [19]. Potential ethical implications of VR include physiological and cognitive impacts and behavioral and social dynamics. Identifying and managing procedures to address emerging ethical issues will happen not only through regulations and laws (e.g., government and institutional approval), but also through ethics-in-practice (respect, care, morals, and education).

Including Ethics in the Design

Integrating ethics and moral sensitivity into design is referred to as “anticipatory technology ethics” by Brey [4] and “responsible research and innovation” by Sutcliffe [23]. These researchers emphasize the vital importance and responsibilities that designers have on technologies and their capacities, as well as designers’ moral obligations to the public. These obligations may include a wider long-term view, taking into account social involvement, environmental impacts, and other repercussions. Moral responsibilities related to technology have long been a subject of debate. For example, guidelines presented by Keith Miller [12] and other researchers on the topic of moral responsibilities emphasize that people who design, develop, and deploy a computing artifact (hardware or software) are accountable for that artifact, and for the foreseeable effects of that artifact.

Traditional moral responsibilities in the physical world do not necessarily translate to virtual worlds created by designers.

However, it is unclear how to predict the impact of virtual reality technologies (i.e., foreseeable effects). There is also a question of “foreseeable use” versus “intended use.” Hardware engineers may develop virtual reality technologies that are then used for unintended purposes in applications and by software developers.

In the wake of society’s exposure to VR, and due to today’s powerful computer systems, designers are able to create and develop complex interactive virtual worlds. These immersive environments offer numerous opportunities — both good and bad. But organizations and designers are not obligated to obey ethical restraints. There is also the element of hackers, and the issue of immoral exploitation of the technologies. These ethical questions arise partly because VR technologies are pervasive and difficult to classify and identify, and because it is difficult to predict their short- and long-term impacts. VR technologies also raise questions about legal responsibility, for example if software and hardware are used incorrectly or in unethical ways (see  Figure 2  for an outline of the ethical challenges connected with VR technologies).

So as VR has hit the mainstream, much debate has arisen over its ethical complexities. Traditional moral responsibilities do not always translate to the digital world. One aspect we argue is essential to ethical responsibility for virtual reality is that VR solutions must integrate ethical analysis into the design process, and practice dissemination of best practices. In the digital era, organizations and individuals need to uphold ethical and professional responsibilities to society and the public. Creativity should be combined with diligence. Decision making, ethics, and critical thinking should go hand in hand throughout the development process. Development needs to include future predictions, forecasting impact, evaluating and elaborating on possible consequences, and identifying any issues with openness and transparency.

Benefits and Applications of VR

VR technologies are commonplace in today's marketplace, with key players including Google, Microsoft, Oculus, Sony, and Samsung seeking to push the limits and applications of VR. VR first appeared in the 1980s, but then faded away. This time VR is here to stay [3].

Related to VR, we need to acknowledge the importance of active real experience. Active real experience is a fundamental element within VR (i.e., the illusion of the "real"). Real, or close-to-reality, experiences affect the user by providing a "positive" experience. Applications with these touted benefits include games, films, education, training, simulations, communications, medicine (e.g., rehabilitation), and shopping.

Due to the availability and flexibility of VR technologies, the number of virtual reality users is forecast to reach 171 million by 2018, with the VR market set to continually grow at an extraordinary rate  [20]. In 2018  [21], the value of the global consumer virtual reality market is estimated to be U.S. $4.5 billion (see  Figure 1 ).
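Forecast figures like these are typically extrapolations of earlier trends (as the caption of Figure 1 notes). As a minimal illustration of how such a compound-growth projection works; the base of 28 million users and the 57% annual growth rate below are made-up numbers chosen to land near the 171 million figure, not data from the cited report:

```python
def extrapolate(base, rate, years):
    """Project a user count forward assuming constant compound growth:
    `base` users in year 0, growth `rate` per year (0.57 = 57%)."""
    return [round(base * (1 + rate) ** y, 1) for y in range(years + 1)]

# Illustrative only: 28M users growing ~57% per year reaches roughly 170M
# after four years, matching the shape (not the data) of the forecast.
projection = extrapolate(28.0, 0.57, 4)
```

The weakness of this kind of projection is visible in the code: a single constant `rate` is assumed to hold indefinitely, which is exactly why such market forecasts are frequently revised.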

Figure 1. Number of active virtual reality users worldwide beginning in 2014, in millions  [20]. Forecasts for the future are based on previous trends.

Figure 2. Ethical questions and challenges around VR technologies.

Need for Investigation

Currently, there is a lack of information on the short- and long-term physiological impacts of VR. There is also not enough known about who and what types of individuals are using VR (age, types of experience, attitudes, and levels of digital sophistication). Many questions relate to individual attributes, and to what degree the user needs to possess “critical reasoning” abilities.

The intersection of ethics and virtual reality has to date focused primarily on individual issues, for example, specific content, or blood or violence. While these dilemmas are important, many other subtler ethical issues relating to virtual reality demand the attention of designers, scientists, engineers, and related communities. Designers, programmers, and testers usually focus on specific areas, yet they could be involved in contributing to solutions to ethical issues, or they could be responsible for inputting ethical concerns. Frequently, designers must make decisions based on the lens of their knowledge and experiences. But designers’ scope of knowledge does not always encompass the wide range of areas that might impact the public related to physiological, social, or ethical aspects.

Ideally, consumers should be entitled to know what "tests" have been done to ensure public safety, including physical and mental safety, for young and old, in all situations and environments. In addition, any "possible" problems or "neglected" issues should be explicitly stated as a matter of public and moral obligation, not just for legal purposes. Of course, this might be challenged by managerial decisions — any "questioning" or "refusal" (or even a public announcement made without permission, due to NDAs) might impact an individual's career. Hence, regulators need to step in and ensure "designers" are accessible and the facts are not compromised. Prevention is better than "correction." We want to avoid reacting to a disaster after it has happened. We want to solve the problem before it manifests itself, using forward-thinking, preventative measures to create a safer, more reliable, future-proof technology or solution.

There is also debate about corporations “waiting” for regulators and legal liabilities to push them towards more moral, safer designs. This attitude can cause significant harm to the public.

Complex Intercoupled System of Components

We need to look at VR solutions as a whole, and not just at individual components such as specific interactions or sounds. The interrelated and synergistic operation of the system can have a broader impact on the user. VR combines multiple senses (audio, visual, touch, and movement), each of which influences the immersive experience.

One possible experience is passive involvement, where a user may sit back and "watch" or experience the situation "autonomously." Another is more active involvement, where the user is required to "hammer" home the activity or action. The complexities of designing a VR solution involve millions of lines of code and a myriad of three-dimensional content elements that provide texture and geometry, not to mention sounds and specialist hardware like headsets and head-tracking tools. While software testing has always been challenging [15], [25], testing the physiological, ethical, and social aspects introduces a new level of difficulty. The challenges of addressing specific scenarios and the complexity of the system are compounded by the new levels of freedom in VR – by the variety of uncertainties and situations that are possible.

VR designs need to account for human interfaces, environmental perceptions, levels of freedom, user-user interactions (social/networking), coordination, and control. Different users and developers will use the hardware/software in different ways, creating multiple outcomes and choices. Strong trends towards online solutions, with user-user interactions and communication, increase the possible complexity and may also lead to "swarms" of virtual users – another area where further research is needed.
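The "head-tracking tools" mentioned above reduce, each frame, to a small amount of math: the tracker reports the headset's orientation, commonly as a unit quaternion, which is applied to a default view direction. A pure-Python sketch of that rotation; the w-x-y-z quaternion layout, y-up axes, and the (0, 0, -1) default forward are conventions assumed here for illustration, not taken from any particular SDK:

```python
import math

def quat_to_forward(w, x, y, z):
    """Rotate the default forward vector (0, 0, -1) by a unit quaternion
    (w, x, y, z), giving the direction the headset is facing."""
    # Rotation of v by q: v' = v + w*t + r x t, where r = (x, y, z)
    # and t = 2 * (r x v). This avoids building a full rotation matrix.
    vx, vy, vz = 0.0, 0.0, -1.0
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    fx = vx + w * tx + (y * tz - z * ty)
    fy = vy + w * ty + (z * tx - x * tz)
    fz = vz + w * tz + (x * ty - y * tx)
    return (fx, fy, fz)

# A 90-degree yaw (rotation about the y axis) turns "forward" from -z toward -x.
half = math.radians(90) / 2
fwd = quat_to_forward(math.cos(half), 0.0, math.sin(half), 0.0)
```

Testing a pipeline like this is the easy part; as the article argues, the hard part is that the same per-frame math, combined with user freedom, produces a combinatorial space of situations no traditional test plan covers.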

We anticipate that before long, swarms of virtual users will be able to interact and communicate. We need to ensure this is done safely. Close coupled interactions of multiple users will also raise questions of privacy and hacking, i.e., of possible intentional tampering or non-legitimate accessing of user resources.

Over-Trusting

The public and users have a predisposition to trust technologies from big brands, often accepting them without question. While VR solutions possess the power to entertain, engage, and tantalize users, they also have the power to cause significant physiological trauma. There are worrying concerns about over-trusting new technologies. Some questions designers and users need to ask themselves are:

  • Is it possible, for example, for the VR system to be "hacked" without the user knowing (i.e., by modifying or injecting changes into the user's virtual world)?
  • How much does “age” impact the experience in terms of digital awareness, overall experience, mental sensitivity, etc.?
  • How will a user respond to unforeseen troubles? (For example, will they jerk, fall over, scream, harm themselves?)

Interestingly, with regard to the last point, if a person is immersed and believes they are really acting out the experience, they will react as they would in a real situation (i.e., reflexive behaviors could emerge). The user would be actively and cognitively engaged with the virtual environment. The ways that VR intertwines a user's psychological and behavioral responses must be taken into account by designers.

Regulations

As VR developers and manufacturers pursue significantly different design pathways, it becomes difficult for regulators to keep up and to develop rules and regulatory standards for safety. One of the crucial divides relates to the "applications" of VR, that is, to the types of interfaces, the uses, the people who use them, etc. Of course, companies seek competitive advantage and are less interested in sharing information that might injure trade secrets. A balance needs to be achieved between openness, reliability, and corporate rivalry and profit. Arguably, standards for VR technologies would need a specialized set of safety features, beyond traditional engineering tests and approaches to evaluating safety.

While some issues could be evaluated using traditional standards, such as violence and types of content, the immersive aspect of VR introduces additional risk factors that need to be accounted for, including aspects related to VR's training and manipulation of the mind. Designers will also need to take into account approaches and solutions that reduce risks and harm. They need to ensure that users are not left free to expose or harm themselves without guidance.

Relevant professional communities need to become collectively involved in developing rules and guidelines around the design process. Importantly, designers need to incorporate ethical thinking when creating innovative and creative virtual reality solutions that account for safety and impact considerations. Each designer should look at their creation or design and consider their ethical obligations. Designers, testers, and managers need to take a "value-sensitive" approach and contemplate the implications of what they are creating.

How would we "demonstrate" that a virtual reality technology is safe? This also leads on to questions about levels of safety and risk, and to consideration of ratings. There may also need to be an emphasis on "warnings" about possible side effects. There is also the question of how a design will impact others, and questions of social factors. For example, could the technology incite or promote unlawful behavior?

Risks to Children

Studies have shown that children are most vulnerable when it comes to VR technologies, as they are highly susceptible and can more easily confuse what is real with what is not; that is, they may be less able, or unable, to distinguish between the real world and the virtual world [18]. For example, in a study by Segovia and Bailenson [18], young elementary children watched their virtual doppelganger swimming with orcas. When these children were questioned a week later, they said they believed their virtual experience to be real. In recent studies [2], young children connected with "virtual characters" (avatars) and saw the avatar in VR as more real than characters or avatars in other media, such as television. The avatar in the virtual environment was more influential than its television equivalent, making it more difficult for the children to inhibit their actions or to resist following the avatar's commands. And it is not only young children who internalize VR scenarios – these scenarios also impact young adults.

For example, older adolescents have been found to be particularly sensitive to being socially excluded in a virtual environment. What this means is that parents need to be particularly careful about the type of VR content they allow their children to view (see Figure 3). Note that the majority of research has been done on young adults, with little understanding of what happens to younger children when they are exposed to virtual worlds [5], [18].

Figure 3. Psychological Factors – Stages of learning and human development impact how our environment and experiences change as we get older.

Post-Traumatic Stress Disorder (PTSD)

Post-traumatic stress disorder (PTSD) is commonly caused by directly witnessing a real-life event that is life-threatening or violent in nature. Current clinical diagnosis of PTSD excludes exposures that occur through electronic media, including movies and pictures [6], [8], [16]. However, given VR's increasing ability to stimulate senses beyond sight and sound, due to its immersive and interactive nature, one has to wonder whether at some point these experiences will result in the brain's fear centers getting rewired in a way similar to that seen in PTSD. One could hypothesize that if a person felt their VR experience was real (i.e., if they really felt they were at risk of harm), and if they did not have a way of voluntarily ending the experience, they could experience rewiring of the fear circuitry of their brain in a manner similar to PTSD. They might then develop a range of PTSD-like symptoms.

Desensitization

Funk et al. [7] believe repeated exposure to real-life and entertainment violence could alter cognitive, affective, and behavioral processes, possibly leading to desensitization. Their study showed a relationship between exposure to real-life and media violence and desensitization, as reflected in related characteristics. One hundred fifty fourth- and fifth-graders completed measures of real-life violence exposure, media violence exposure, empathy, and attitudes towards violence. Regression analyses indicated that only exposure to video game violence was associated with (lower) empathy. Both video game and movie violence exposure were associated with stronger pro-violence attitudes. The active nature of playing video games, the intense engagement, and the tendency for play to be translated into fantasy may explain the negative impact, though causality was not investigated in the study's design.

Not all Bad

There are “dangers” with anything – however, we must not forget the huge benefits of combining VR with games, in education, rehabilitation, training, and of course, entertainment  [10],  [14],  [24]. VR is a technology – how we use VR, for good or bad, is up to us.

And VR is not the only issue affecting a user’s mental health. Many other factors outside VR influence the individual’s mental health, e.g., work, social life, or family.

VR and games also offer a means of escape. Virtual reality lets our imagination go to new heights because anything is possible. Virtual Reality helps us to test the information learned in a “real-life” situation so that we are able to evaluate – simulate – theoretical knowledge in a practical implementation. With VR we can simulate how machinery works and responds, and we can replicate soft skills such as human actions and behaviors. Another huge area is how virtual reality impacts learning, making learning fun, exciting, and visual.

There has been and continues to be rapid growth in Virtual Reality technologies. It is estimated that there will be 300+ million VR users worldwide by 2020. There remains room for debate around the topic of ethical responsibilities for these technologies. While it can be argued that makers cannot be held 100% responsible for their designs, each company and individual designer should demonstrate reasonable caution, through monitored trials and testing. Designers should not ignore possible mental health and safety issues, or physiological impacts or social and ethical factors. Steps to address these issues might include interactive testing using human and automated users.

We suggest adding additional investigation and analysis testing stages to the development of virtual reality technologies in efforts to protect the public. These tests might not focus on physical health and safety concerns, but rather on physiological and social influences. Currently, no such trials related to physiological or social factors are required, monitored, or enforced. But a large number of virtual reality applications are already on the market, suggesting that technological and economic forces may overrun efforts to protect the public good. The fact that VR is already available does not mean there is no need to address this issue, and it should not be left until it is too late.

The growth of VR technologies leads to an increase in new products and accelerated development of VR in industries such as education, healthcare, household management, tourism, and video games, impacting social and economic sectors. On one hand, there will be huge opportunities for new and innovative VR applications, beyond entertainment uses. On the other hand, there are numerous challenges and ethical issues that need to be addressed. More research needs to be done to investigate the psychological impact of VR, especially on young children, both in the short and long term. However, if the VR economy is to continue to grow while maintaining sustainable healthy new developments, it must be supported by scientific research to investigate the social and ethical issues around these technologies.

ACKNOWLEDGMENT

The author would like to thank the reviewers for taking time out of their schedules to provide insightful and helpful comments to improve this article.

Author Information


Ben Kenwright is with the University of Bolton, Bolton, U.K. Email: [email protected].

To view full article, including references and footnotes, click HERE .


Virtual Reality: The Technology of the Future

Virtual reality (VR) is a technology that lets the user interact with a computer-simulated environment, whether that environment models the real world or an imagined one. Most contemporary virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays; however, some simulations include additional sensory input, such as sound through speakers or headphones. Some advanced versions include tactile feedback, known as force feedback, which is common in medical and gaming applications. Users can interact with the virtual environment either through standard input devices or through multimodal devices.

The simulated environment may resemble the real world, or it may depart from reality entirely. Practically speaking, it is currently impossible to create a high-fidelity virtual reality experience, largely because of technical limitations. The hope is that these shortcomings will eventually be fixed as processor, imaging, and data-communication technologies become more refined and less costly. There are countless uses for virtual reality technology.

The advantages of virtual reality are diverse and wide-ranging, covering everything from games to teaching doctors the skills of surgery and training pilots to fly aircraft safely. It can be applied to traffic management, medicine, entertainment, workplace design, and industrial layouts. However, alongside the credit side, the debit side must also be mentioned: the technology can serve destructive ends, and it can easily be employed in the world of crime and in actual warfare.

The notion of virtual reality first came to the fore in the 1930s, when engineers built the first flight simulators for pilot training. The aim was to place the pilot in a realistic situation before he or she actually flew. Virtual reality has many positive implications; for example, it gives people with disabilities the ability to do things that would otherwise be impossible for them.

In the virtual world, people in wheelchairs have a freedom of movement that is not found in the real world. “VR models of buildings can be used for several purposes; document management, interior design option analyses by end users, operations planning, evacuation simulations etc. Construction practitioners expect rather widely that vr model can be the user interface to complex data and models in near future. For example by pointing a particular object in the building model the user can obtain all documentation relevant to that object. This feature means that in the nearby future the instructions for service and use can take full advantage of virtual reality technology” (Timothy Leary, Linda Leary, 2007).

Though the technology is currently not accessible to everyone because of its price, it will, as has been the fate of every technology, evolve over time until its price comes within the reach of most people. It is expected to enter ordinary homes as lightweight helmets and more powerful computers are developed. Virtual reality also has many applications in architecture of all kinds and in industrial layouts.

Computer-aided design has been a significant tool since the mid-1970s, as it lets the user view three-dimensional images on a computer screen. However, until VR helmets and gloves existed to project those images into, it was not possible to become truly absorbed in the virtual world. Virtual reality has given a phenomenal boost to the aviation business, as it removes the need to build many different physical prototypes.

Each time an engineer designs a new aircraft or helicopter, a model has to be built to guarantee that it will fly efficiently and that it is safe for crew and passengers. If the model is wrong, the designer has to return to the drawing board, alter the design, and build another one. This is a very costly and time-consuming process. Using virtual technology, designers can draw, construct, and evaluate their aircraft in a virtual environment without building a real one. It also lets designers experiment with different ideas: every detail can be inspected closely, and the most feasible design can be chosen. NASA has used virtual reality to design a helicopter, and Boeing has employed it to design new aircraft.

Through virtual reality, doctors can gain access to the inside of the human body.

Doctors have even been able to make their way into the thorax to check that the radiation beams needed to treat a cancer were correctly positioned. “Applications of these technologies are being developed for health care in the following areas: surgical procedures (remote surgery or telepresence, augmented or enhanced surgery); medical therapy; preventive medicine and patient education; medical education and training; visualization of the massive medical database; skill enhancement and rehabilitation; and architectural design for health care facilities. To date, such applications have improved the quality of health care, and in the future they will result in substantial cost savings. Tools that respond to the needs of present virtual environment systems are being refined or developed. However, additional large-scale research is necessary in the following areas: user studies, use of robots for telepresence procedures, enhanced system reality, and improved system functionality” (Giuseppe Riva, 1997).

Doctors will in the near future be able to investigate and study tumors in three dimensions, rather than from flat scans and X-rays. In America, a murderer who was executed in the electric chair donated his body to science; his corpse was cut into thin sections and used to build a virtual body for research. It is also hoped that in the near future students will be able to practice on virtual bodies rather than real patients, which would help overcome many medical problems.

At the molecular level, virtual reality is being exploited in drug research. Scientists have succeeded in modeling molecules and in visualizing and ‘feeling’ how they interact with each other. Before this technology was available, the process was extremely slow and intricate.

There is therefore a strong probability that virtual reality will influence the pace at which new drugs and cures are developed and become available, and so facilitate treatment in the future. “On a microscopic level, virtual reality is being used in drug research. Scientists at the University of North Carolina are able to create the molecules and then visualize and ‘feel’ how they react with each other. Before the use of virtual reality, this process was very slow and complicated. Therefore, it is likely that virtual reality will have a strong impact on the speed with which new drugs and remedies are developed and become available in the future” (Thinkquest, 2004).

Virtual reality is significant in that it has the potential to visualize the unseen or the elusive. It could, for instance, allow repairs to be carried out in space with the assistance of a robot: in a technique called virtual puppetry, a robot is controlled by an expert operator and imitates all the movements of that operator.

The options for virtual technology are huge. Future inhabitants of new towns will be able to walk through virtual streets, shops, and other places before they have even been built. There are hopes that the great capital cities of the Western world will be redesigned using this technology. Although virtual technology is still at an embryonic stage, its roots can be traced back to the invention of supercomputers.

Though the entertainment industry is best known for its use of virtual technology, several other industries exploit it on a much bigger scale. Modern meteorologists use it to forecast weather conditions and to help people in different industries improve their outputs. Weather can now be predicted with a precision that was never available before, and the technology helps provide early warning of severe weather conditions.

Many intricate situations have been simulated. One of the biggest single simulations in use today is a model of the universe, which scientists are using in their efforts to understand how the universe formed. Chemical and molecular prototyping is being done with the assistance of virtual technology, and more efficient car engines can be designed with its help. Biologists have been able to uncover the processes by which proteins interact with each other only after employing this technology.

The realm expected to benefit most from this technology is education. With the arrival of computers, simple lessons could easily be delivered on screen, but more advanced topics were impossible to teach because of the inability to provide face-to-face experience. Currently, driving simulators are being used to prepare drivers for the road, and many difficult academic subjects can now be taught thanks to virtual technology.

Virtual technology also helps people with disabilities engage with their environment. Paralyzed children learn to use motorized wheelchairs more effectively after training with this technology, making progress as they accumulate skills in virtual worlds. A child might, for example, practice crossing a street using the pedestrian signals while staying safe from traffic. Completing each virtual world teaches the child new skills and gives them the contentment and confidence they need most.

The medical industry has benefited substantially from virtual technology. Doctors are employing it to find the appropriate cure for some of the most intricate diseases. “They can study images of a cancer patient’s body structure to plan an effective radiation therapy technique. Doctors also commonly use surgical modeling to learn how an organ responds to a given surgical instrument. This allows doctors to master surgical procedures without having to endanger anyone by learning on-the-job.

Some doctors even use virtual reality to cure patients of certain phobias. For example, people with acrophobia (the fear of heights) are often treated with virtual reality. The patient is subjected to a virtual world that exercises their fear. In the acrophobia example, they could be looking over the side of a cliff in their simulation. The patient is usually able to overcome their fear due to the fact that they know the situation is only computer simulated and can not actually harm them” (Keith Mitchell, 1996).

Another domain in which virtual reality is gaining appreciation is the Internet. VR can be used to reinforce the web's interface and turn it into a true ‘cyberspace’. The web revolution will be able to sustain its momentum by adding three-dimensional interactive graphics, which became practical only after the development of VRML. Combined with Java, VRML permits an entire interactive world to be built from a single web page, letting people in far-off places interact with one another in a virtual world hosted on a central website.

Though the fundamental components of the technology have existed for some two decades, they were not combined and used intensively until recently. The use of this technology is now expanding rapidly: from scientific research to video games and the Internet, everyone appears to be turning to it. It is one of the few kinds of technology limited only by imagination. Its variety of applications across different domains shows immense promise, and the future of virtual technology seems very bright.

Alongside these aspirations, virtual technology has been criticized as an inept method for conveying non-geographical knowledge. The concept of ubiquitous computing, currently prominent in user-interface design, may be seen as a reaction against virtual reality and its encumbrances. In actual practice, however, the two kinds of interface have different objectives and are mutually reinforcing. The goal of ubiquitous computing is to bring the computer into the user's world, rather than forcing the user to enter the world inside the computer. The contemporary trend in virtual reality is to merge the two user interfaces to create a fully immersive and integrated experience.

Giuseppe Riva (1997), Virtual Reality in Neuro-Psycho-Physiology. IOS Press. Page 3.

Timothy Leary, Linda Leary (2007), Computing Essentials. Career Education.

Keith Mitchell (1996), “Virtual Reality”. UNIX-guru. Web.

Thinkquest (2004), “Virtual Reality”. Web.


Implement Sci Commun · PMCID: PMC10276472

Implementation of virtual reality in healthcare: a scoping review on the implementation process of virtual reality in various healthcare settings

Marileen M. T. E. Kouijzer

1 Centre for eHealth and Wellbeing Research; Department of Technology, Human & Institutional Behaviour, University of Twente, Enschede, Netherlands

Hanneke Kip

2 Department of Research, Transfore, Deventer, Netherlands

Yvonne H. A. Bouman

Saskia M. Kelders

Associated data

All dataset(s) supporting the conclusions of this article are available in the included primary studies.

Virtual reality (VR) is increasingly used in healthcare settings as recent technological advancements create possibilities for diagnosis and treatment. VR is a technology that uses a headset to simulate a reality in which the user is immersed in a virtual environment, creating the impression that the user is physically present in this virtual space. Despite the potential added value of virtual reality technology in healthcare, its uptake in clinical practice is still in its infancy and challenges arise in the implementation of VR. Effective implementation could improve the adoption, uptake, and impact of VR. However, these implementation procedures still seem to be understudied in practice. This scoping review aimed to examine the current state of affairs in the implementation of VR technology in healthcare settings and to provide an overview of factors related to the implementation of VR.

To give an overview of relevant literature, a scoping review was undertaken of articles published up until February 2022, guided by the methodological framework of Arksey and O’Malley (2005). The databases Scopus, PsycINFO, and Web of Science were systematically searched to identify records that highlighted the current state of affairs regarding the implementation of VR in healthcare settings. Information about each study was extracted using a structured data extraction form.

Of the 5523 records identified, 29 were included in this study. Most studies focused on barriers and facilitators to implementation, highlighting similar factors related to the behavior of adopters of VR and the practical resources the organization should arrange for. However, few studies focus on systematic implementation and on using a theoretical framework to guide implementation. Despite the recommendation of using a structured, multi-level implementation intervention to support the needs of all involved stakeholders, there was no link between the identified barriers and facilitators, and specific implementation objectives or suitable strategies to overcome these barriers in the included articles.

To take the implementation of VR in healthcare to the next level, it is important to ensure that implementation is not studied in separate studies focusing on one element, e.g., healthcare provider-related barriers, as is common in current literature. Based on the results of this study, we recommend that the implementation of VR entails the entire process, from identifying barriers to developing and employing a coherent, multi-level implementation intervention with suitable strategies. This implementation process could be supported by implementation frameworks and ideally focus on behavior change of stakeholders such as healthcare providers, patients, and managers. This in turn might result in increased uptake and use of VR technologies that are of added value for healthcare practice.

Contributions to the literature

  • Virtual reality is an innovative technology that is increasingly applied within different healthcare settings. Despite its potential to improve treatment, the adoption and uptake of VR are generally lacking.
  • In this scoping review, we identified factors related to the implementation of VR that are important for successful adoption and effective use in practice. However, most often these factors are not sufficiently translated from research outcomes to healthcare practice.
  • The findings of this scoping review contribute to the recognized gaps in the literature, stating recommendations for practice and future research on the systematic implementation of VR in healthcare.

Virtual reality (VR) is increasingly used in healthcare settings as recent technological advancements create possibilities for diagnosis and treatment. VR is a technology that uses a headset to simulate a reality in which the user is immersed in a virtual environment, creating the impression that the user is physically present in this virtual space [ 1 , 2 ]. VR offers a broad range of possibilities in which the user can interact with a virtual environment or with virtual characters. Virtual characters, also known as avatars, can provide the user with a greater sense of reality and facilitate meaningful interaction [ 1 ]. VR interventions have been piloted in various healthcare settings, for example in treating chronic pain [ 3 ], improving balance in patients post-stroke [ 4 ], managing symptoms of depression [ 5 ], improving symptom burden in terminal cancer patients [ 6 ], and applied within treatment for forensic psychiatric patients [ 7 ]. These studies highlight the opportunities for VR as an innovative technology that could be of added value for healthcare. While there is a need for more research on the efficacy of VR in healthcare, experimental studies have shown that VR use is effective in improving the treatment of, among others, anxiety disorders [ 8 ], psychosis [ 9 ], or eating disorders [ 10 ]. However, the added value of VR is often not observed in practice due to the lack of usage of this technology.

Regarding uptake in clinical practice, VR is still in its infancy [ 11 , 12 ]. Various barriers are identified as limiting the uptake, such as a lack of time and expertise on how to use VR in treatment, a lack of personalization of some VR applications to patient needs and treatment goals, or the gap in knowledge on the added value of VR in a specific setting [ 11 , 13 ].

VR uptake is not the only challenge; other eHealth technologies experience similar difficulties in implementation [ 14 ]. eHealth is known as “the use of technology to improve health, well-being, and healthcare” [ 14 ]. For years, implementation has been out of scope for many eHealth research initiatives and healthcare practices, resulting in technologies that have never surpassed the development stage [ 15 ]. For these technologies to succeed and be used as effectively as intended, they must be well integrated into current healthcare practices and connected to the needs of patients and healthcare practitioners [ 13 ]. A focus on implementation is therefore of added value: it has the potential to improve the adoption, uptake, and impact of technology [ 16 ]. However, implementation procedures for VR technology still seem to be understudied in both research and practice [ 12 , 17 ].

One of the reasons for the lacking uptake of (eHealth) technology is the complexity of the implementation process [ 18 , 19 ]. The phase between the organizational decision to adopt an eHealth technology and the healthcare providers actually using the technology in their routine is complex and multifaceted [ 18 , 19 ]. This highlights the importance of a systematic and structured implementation approach that fits identified barriers. The use of implementation strategies, known as the “concrete activities taken to make patients and healthcare providers start and maintain use of new evidence within the clinical setting,” can help this process by tackling the implementation barriers [ 20 ]. These strategies can be used as standalone, multifaceted, or as a combination [ 21 ]. Often, they are part of an implementation intervention, which describes what will be implemented, to whom, how, and when, with the strategies as a how-to description in the intervention [ 17 ]. In addition, according to Proctor et al. [ 22 ], it is important to conceptualize and evaluate implementation outcomes. Implementation outcomes, such as acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability, can be used to set specific and measurable implementation objectives. Furthermore, assessing implementation outcomes will increase the understanding of the success of the implementation process and form a starting point for studies focusing on the effectiveness of VR in healthcare [ 22 ].

While implementation interventions could help the systematic implementation of VR, they are rarely used in practice. A way to stimulate systematic implementation and help develop an implementation intervention is by using an implementation model to guide this process. While a broad range of implementation models have been developed, there is still limited use of these models to structure the implementation of VR in healthcare [ 23 ]. One framework that could be used to identify important aspects of implementation is the NASSS framework, which investigates the non-adoption, abandonment, and challenges to scale-up, spread, and sustainability of technology-supported change efforts in health and social care [ 24 ]. The NASSS framework does not only focus on the technology itself, but includes the condition of the target group, the value proposition, the adopter system (staff, patients, and healthcare providers), the healthcare organization(s), the wider system, and the embedding and adoption of technology over time [ 24 ]. The framework is used to understand the complexity of the adoption of new technologies within organizations [ 25 ]. However, it remains unclear whether and which factors of the NASSS framework, or any other implementation framework, can be found in the implementation of VR in various healthcare settings.

In summary, virtual reality interventions have the potential to improve the quality of care, but only if implemented thoroughly. As VR use becomes more prevalent, studies should expand the focus to identify factors specifically related to the implementation of this new technology [ 19 ]. It is advised to perform a needs assessment, understand potential barriers to implementation early, set implementation objectives, and identify fitting implementation strategies before testing VR interventions in practice [ 26 ]. Therefore, this scoping review aims to examine the current state of affairs in the implementation of VR technology in healthcare settings and provide an overview of factors related to the implementation of VR. Within this research, the following sub-questions are formulated: (1) Which barriers play a role in the implementation of VR in healthcare? (2) Which facilitators play a role in the implementation of VR in healthcare? (3) What implementation strategies are used to implement VR in healthcare? (4) To what extent are specific implementation objectives and outcomes being formulated and achieved? (5) What are the recommendations for the implementation of VR in healthcare?

To address the study aims, a scoping review was undertaken on the current state of affairs regarding the implementation of virtual reality in healthcare settings. Due to the broad scope of the research questions, a scoping review is most suitable to examine the breadth, depth, or comprehensiveness of evidence in a given field [ 23 ]. As a result, scoping reviews represent an appropriate methodology for reviewing literature in a field of interest that has not previously been comprehensively reviewed [ 24 ]. This scoping review is based on the methodological framework of Arksey and O’Malley [ 27 ] including the following steps: (1) identifying the research questions, (2) identifying relevant studies, (3) study selection, (4) charting the data, and (5) collating, summarizing and reporting the results. A protocol was developed and specified the research questions, study design, data collection procedures, and analysis plan. To the authors’ knowledge, no similar review had been published or was in development. This was confirmed by searching academic databases and the online platforms of organizations that register review protocols. The protocol was registered at OSF (Open Science Framework) under registration https://doi.org/10.17605/OSF.IO/5Z3MN . OSF is an online platform that enables researchers to plan, collect, analyze, and share their work to promote the integrity of research. This scoping review adheres to the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) [ 26 ].

A comprehensive, systematic electronic literature search was undertaken using three databases: Scopus, PsycINFO, and Web of Science. In each database, the same search strategy was used. Search terms were identified and included in the search strategy for three main categories relevant to the research questions: implementation, virtual reality, and healthcare. The search terms within a category were combined using the Boolean operator “OR”, and the operator “AND” was used between the different categories. The search strategy was piloted to check whether the keywords and databases were adequate, and adjustments were made whenever necessary. The full electronic search strategy can be found in Appendix 1.
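The Boolean structure described above, with “OR” within a category and “AND” between categories, can be sketched as a small query builder. Note that the search terms below are illustrative placeholders, not the review's actual search strategy:

```python
# Build a Boolean search string: OR within each category, AND between categories.
# The terms below are illustrative placeholders, not the review's real strategy.
categories = {
    "implementation": ["implementation", "adoption", "uptake"],
    "virtual reality": ["virtual reality", "VR"],
    "healthcare": ["healthcare", "clinical", "patient*"],
}

def build_query(cats):
    # Each category becomes a parenthesized OR-group; groups are joined with AND.
    groups = ['(' + ' OR '.join(f'"{t}"' for t in terms) + ')'
              for terms in cats.values()]
    return ' AND '.join(groups)

query = build_query(categories)
print(query)
# → ("implementation" OR "adoption" OR "uptake") AND ("virtual reality" OR "VR") AND ("healthcare" OR "clinical" OR "patient*")
```

Keeping the categories in a dictionary makes it easy to pilot the strategy: terms can be added or removed per category and the query regenerated for each database.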

Study inclusion and exclusion criteria

All identified records published up until February 2022, that were peer-reviewed, and written in English, Dutch, or German, were included in the initial results. All references and citation details from different electronic databases were imported into the online review management system Covidence and duplicate records were removed automatically. A three-step screening approach, consisting of a title, abstract, and full-text screening, was used to select eligible studies.

Records were included if the titles indicated that the article focused on VR within a healthcare setting and that VR was used as a tool for prevention or treatment of patients. Because implementation might not be mentioned in the title, broad criteria were used to prevent the unjust exclusion of relevant studies. In addition, records were included if they outlined (parts of) the implementation process of VR technology (e.g., needs assessment, planning, execution, or lessons learned). Furthermore, the primary target group of the VR technology had to be patients with mental or physical disorders. Studies were excluded if they focused solely on augmented reality (AR) or mixed reality (MR) and/or described a VR technology that was used to train healthcare professionals. Additionally, studies were excluded if full texts could not be obtained or if the study design involved no primary data collection, such as meta-analyses, viewpoint papers, or book chapters.

In the first step, two authors (MK & HK) screened all titles against the inclusion and exclusion criteria for the scoping review. Titles were included based on consensus between both authors; in the event of doubt or disagreement, the title was discussed by both authors. After screening the titles, both authors screened and assessed the abstracts using the inclusion and exclusion criteria. Abstracts were included or excluded based on consensus. In the final step, one author (MK) screened the full-text articles. Reasons for exclusion, and any reservations about inclusion, were discussed with the other authors. The results of the search are reported in full and presented in a Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) flow diagram [ 28 ] (Fig. 1).
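The deduplication and staged screening described above can be illustrated with a minimal sketch. The review itself used Covidence for this; the records and the title-stage decision rule below are invented purely for illustration:

```python
# Minimal sketch of deduplication plus a staged screening funnel
# (only the title stage is shown; abstract and full-text screens chain the same way).
# All records and the decision rule here are invented for illustration.
records = [
    {"id": "r1", "title": "VR for chronic pain"},
    {"id": "r2", "title": "VR for chronic pain"},   # duplicate of r1
    {"id": "r3", "title": "AR surgical training"},  # AR-only: excluded at title stage
]

def deduplicate(recs):
    """Keep the first record for each normalized title."""
    seen, unique = set(), []
    for r in recs:
        key = r["title"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

def screen(recs, decide):
    """Keep only the records that this stage's decision function accepts."""
    return [r for r in recs if decide(r)]

unique = deduplicate(records)                                # r1 and r3 remain
after_title = screen(unique, lambda r: "VR" in r["title"])   # r1 only
```

Each `screen` call models one consensus stage; in practice the decision function is two human reviewers, not a keyword test.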

Fig. 1 Search strategy and results

Data extraction strategy

The data extraction of this scoping review is mostly based on the guidelines of the Cochrane Handbook for Systematic Reviews of Interventions [ 29 ]. A systematic assessment of study quality was not performed because this review focused on giving a broad overview of all factors related to the implementation of VR. This resulted in a heterogeneous sample of included study topics and designs, ranging from explorative qualitative studies to reflective quantitative studies. The data extraction process started with the creation of a detailed data extraction form based on the research questions in Microsoft Excel. This form was generated to capture the most relevant information from all obtained studies and standardize the reporting of relevant information. The extracted data included the fields presented in Table 1. One author (MK) filled out the data extraction forms; in case of uncertainties, a second author was consulted (HK). Secondly, for each category, relevant text fragments from each study were copied from the articles into the data extraction forms.

Information extracted from included articles

Data synthesis and presentation

To answer the first and second research questions, the fragments from the data extraction forms were coded inductively. To answer the third and fourth research questions, fragments were first coded deductively, based on the main categories of the NASSS framework: technology, adopters, organization(s), wider system or embedding, and adaptation over time [ 24 ]. Second, within these categories, the specific barriers and facilitators were coded inductively to identify recurrent themes. The implementation recommendations were coded inductively to answer the fifth and last research question. The first author executed the coding process, which included multiple iterations and constant adaptations until data saturation was reached. During this iterative process, multiple versions of the coding scheme were discussed with all authors and adapted accordingly.
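The deductive step described above, assigning text fragments to NASSS categories, can be sketched as a simple keyword-based coder. The category names come from the framework, but the keyword lists and fragments below are invented for illustration; the actual coding in the review was done by hand, iteratively:

```python
# Deductive coding sketch: map text fragments onto NASSS categories using an
# illustrative keyword scheme. Keywords and fragments are invented examples.
nasss_keywords = {
    "technology": ["headset", "software", "hardware"],
    "adopters": ["therapist", "patient", "staff"],
    "organization": ["budget", "management", "training time"],
}

def code_fragment(fragment, scheme):
    """Return every category whose keywords appear in the fragment."""
    text = fragment.lower()
    return [cat for cat, kws in scheme.items()
            if any(kw in text for kw in kws)]

fragments = [
    "Therapists reported the headset was uncomfortable for patients.",
    "No budget was reserved for training time.",
]
coded = {f: code_fragment(f, nasss_keywords) for f in fragments}
```

A fragment can land in several categories at once (the first example touches both the technology and the adopter system), which mirrors why the authors needed multiple coding iterations rather than a one-pass scheme.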

Search results

The search strategy, the number of included records, and the reasons for full-text exclusion are provided in Fig.  1 . The main reason for excluding full-text articles was that studies focused on the usability or effectiveness of VR, rather than on the needs assessment, planning, execution, or lessons learned from the implementation process of VR.

Study and technology characteristics

An overview of the characteristics of the 29 included records and the used VR technology is provided in Appendix 2 . The following study designs were identified: qualitative ( n  = 13), quantitative cross-sectional ( n  = 10), and studies that used qualitative as well as quantitative methods ( n  = 6).

Of the 29 included records, 11 focused on VR use in rehabilitation clinics. Additional settings in which VR was applied were general health clinics, mental health clinics, and clinics for specific disorders, e.g., eating disorder clinics or burn clinics. VR technology was often intended to add value as a treatment tool: it was used to improve movement in rehabilitation patients (n = 11) or to decrease anxiety in patients with a stress-related disorder (n = 2). In addition, it was applied to offer distraction or relaxation during medical procedures (n = 4). Beyond the variety in settings and applications of VR, the type of technology also differed: from interactive VR (n = 26), in which patients are immersed via a VR headset in a virtual environment, such as a shopping street or a restaurant, and can interact with that environment, to (360°) videos (n = 4), in which patients are immersed in a virtual environment shown on a (computer) screen, with limited to no possibility for interaction.

Implementation characteristics

An overview of the 29 included studies and their implementation characteristics, such as the use of an implementation model or the stage of implementation research, is presented in Appendix 2 . Eight of the 29 studies used a theoretical framework to structure implementation or data analysis. The Consolidated Framework for Implementation Research (CFIR) [ 30 ] and the Decomposed Theory of Planned Behavior (DTPB) [ 31 ] were each used in 3 studies. In addition, the Unified Theory of Acceptance and Use of Technology (UTAUT2) [ 32 ] and the Innovation Diffusion Theory [ 33 ] were each applied in a single study.

Of the 29 included studies, the data collection of 12 studies took place before actual implementation and focused on factors, expected by stakeholders, that could influence future implementation. The data collection of the other 17 studies took place after actual implementation and reflected on existing factors related to implementation. Thus, most identified barriers, facilitators, and recommendations stated in this review were observed in studies that evaluated an existing implementation process.

Barriers to implementation

Barriers to the implementation of VR were identified based on relevant fragments from the articles. In 26 records, a total of 69 different barriers were identified and divided into the categories of the NASSS framework. All barriers are provided in Table 2 and explained in the accompanying text below.

Barriers to implementation and the number of publications they were mentioned in ( n )

A broad range of barriers was relevant to the implementation of VR in healthcare. Most identified barriers were related to the organization category of the NASSS framework and mainly concerned the lack of practical resources for healthcare providers to use VR. For example, organizations do not schedule sufficient time for healthcare providers to learn how to use VR and how to integrate it into practice. In addition to a lack of time, insufficient technical support, treatment rooms for VR, and VR equipment to treat patients were mentioned as organizational barriers.

Frequently mentioned barriers related to the adopters were factors that negatively influence healthcare providers’ opinions of VR. First, a lack of research and evidence on the added value of VR was mentioned as a barrier. Second, a perceived lack of experience in working with VR was said to cause a lack of confidence and self-efficacy in healthcare providers to work with VR during treatment. The perceived lack of time and limited opportunities to learn how to use VR contributed to this feeling.

Furthermore, technical barriers that hinder VR implementation were identified. Most frequently mentioned were functional issues, such as technical malfunctioning of VR hardware or software, and a lack of client safety when wearing a VR headset in the limited space of the treatment room, which restricts freedom of movement. Also related to the VR headset, a lack of physical comfort for the patient and a feeling of isolation while wearing the headset were frequently mentioned as barriers.

Lastly, barriers related to the condition, value proposition, wider system, and embedding and adoption over time categories of the NASSS framework were less frequently identified. The conditions and physical limitations of patients that could negatively influence VR use, such as several cognitive limitations, distress, or cybersickness during VR, were mentioned as barriers. Related to the value proposition, barriers such as high costs to purchase VR equipment or the lack of time for maintaining the VR hardware were mentioned. In addition, the lack of personalization to patients’ needs and treatment goals was mentioned as a barrier. The barriers related to the wider system and adoption over time, such as organizations not being innovation-minded or the lack of insurance reimbursement to compensate for costs of VR use, were mentioned less frequently.

Facilitators to implementation

Besides barriers, a total of 53 different facilitators to the implementation of VR in healthcare were identified in 26 records. Facilitators were identified based on relevant fragments from the articles and divided into the categories of the NASSS framework. They are listed in Table 3 and explained in the accompanying text below.

Facilitators to implementation and the number of publications they were mentioned in ( n )

In comparison to the barriers, facilitators to implementation were identified less frequently in the included studies. Similar to the barriers, most facilitators were related to the organization category of the NASSS framework. Providing time, rooms, and technical support for healthcare providers to learn and use VR were mentioned most frequently as organizational facilitators.

In multiple studies, it was mentioned that adopters of VR technology need training and education on how to use and integrate VR into treatment. Healthcare providers want to increase their knowledge, skills, and experience with VR to feel confident and to increase their self-efficacy in using VR with patients. In addition, within the adopters category, having access to evidence on the added value of VR for treatment was mentioned as a major facilitator, because it validates the use of VR within the treatment for healthcare providers.

Lastly, facilitators in the condition, technology, value proposition, wider system, and embedding and adoption over time categories of the NASSS framework were identified less frequently. For example, among the sociodemographic factors of patients, young age was identified as a facilitator, since younger patients tend to be more open to new technology and treatments and feel more comfortable using VR. Related to technology, ensuring client safety was mentioned as a facilitator, that is, creating a physically safe space in the treatment room for patients to use VR. This safe and controlled environment was also identified in the value proposition category, meaning that healthcare providers can create a safe space for patients to practice challenging behavior. Lastly, being innovation-minded as an organization and VR becoming increasingly commonplace and affordable to scale up were mentioned as facilitators in the wider system and adoption over time categories of the NASSS framework.

Implementation strategies, objectives, and outcomes

An overview was created of the implementation strategies, objectives, and outcomes extracted from the included studies (see Appendix 2 ). In two studies, a clear implementation objective was mentioned [ 13 , 43 ]. Both objectives focused on designing an implementation intervention, a knowledge translation (KT) intervention, to translate knowledge about the use of VR to healthcare providers. In addition, they aimed to identify factors that influenced VR adoption and healthcare providers’ support needs.

Of the 29 included records, 8 described actual implementation strategies [ 13 , 34 , 35 , 43 , 44 , 48 , 53 , 60 ]. Most were mentioned in studies that collected data after implementation and reflected on existing implementation processes. In the studies that described expected implementation factors, implementation strategies were most often not described; these studies focused on identifying potential barriers and/or facilitators in preparation for the implementation phase and did not evaluate the strategies used.

A summary of the implementation strategies mentioned in the included records is displayed in Table 4. Examples of strategies focused on practical resources were providing VR equipment for treatment, treatment rooms in which the VR technology can be set up and used, and time for healthcare providers to learn about VR use. In addition, training and education on VR use were mentioned as important strategies, including hands-on interactive training, e-learning modules, mentorship for support and troubleshooting, and matching protocols and guidelines on how to use VR. To set up VR treatment, an identified implementation strategy was to support healthcare providers in selecting VR content that fits the patient’s needs and to give information on how to instruct the patient about VR treatment. Lastly, implementation strategies that help increase the motivation of healthcare providers to use VR were addressed, for example, having sufficient time to discuss the potential and added value of VR, or having support from champions or mentors (experienced healthcare providers who share their experience with VR) to motivate others to integrate VR into their treatment practice.

Summary of implementation strategies mentioned in included records

The explicit conceptualization of implementation outcomes and the use of these outcomes to formulate implementation objectives or design implementation strategies was not described as such in the included records. The concepts of acceptability, adoption, uptake, or feasibility were mentioned in 12 records (see Appendix 2 ); however, they were not integrated as outcomes into a systematic implementation process.

Recommendations for implementation

Table 5 provides an overview of the 51 different recommendations for the implementation of VR in healthcare that were mentioned in 20 records. These recommendations were inductively coded and divided into seven categories: (1) Increase understanding of patient suitability, (2) Improve knowledge and skills on VR use, (3) Improve healthcare providers’ engagement with VR, (4) Have support staff available, (5) Points of attention for developing VR treatment, (6) Support functionality of VR hardware and software, and (7) Design and development of implementation.

Recommendations on implementation and the number of publications they were mentioned in ( n )

The first recommendation was to increase the understanding of patient suitability. In other words, it should be clear for healthcare providers how they can determine for which patients VR treatment is a fitting option. One way to determine patient suitability is to take into account the functional limitations of patients, such as their level of mobility or communication skills, before referring patients to VR treatment. Next to functional limitations, one should take into account cognitive limitations and any sensitivity to cybersickness. Patient suitability can be dependent on the goal of VR treatment, as some functional or cognitive limitations are not always a barrier to VR use.

The second recommendation was to improve the knowledge and skills of healthcare providers on VR use. Training programs and other educational resources, such as training days, online meetings, or instruction videos, should be developed and disseminated to healthcare providers; these were mentioned as key elements for improving knowledge and skills.

The third recommendation was to improve healthcare providers’ engagement with VR. To accomplish this, the benefits of VR use and its possible contributions to treatment should be communicated to healthcare providers and patients. The use of successful example cases and disseminating supportive evidence of the added value of VR were mentioned as options to increase the engagement of healthcare providers with VR.

The fourth recommendation was to have sufficient support staff available to support VR use during treatment and to maintain VR equipment. In addition, champions or mentors (healthcare providers experienced in VR treatment) were mentioned as a way to promote uptake and increase the self-efficacy of other healthcare providers in VR use.

The fifth recommendation was related to developing VR treatment. The included studies gave some inconsistent suggestions on the frequency of use, from daily to once a week. Important aspects of developing a VR treatment are to set clear treatment goals, let the patient become familiar and comfortable with the VR equipment and software, and increase the treatment difficulty step by step.

The sixth recommendation was to support the functionality of VR hardware and software and to ensure that it fits the user. Software should be appropriate for the patient’s needs and age, and should fit the treatment setting. For example, VR software for forensic mental healthcare patients with aggression regulation problems should let patients practice self-regulation strategies in virtual environments in which their undesired behavior is triggered. This could be a bar or supermarket with strangers for one patient, or a more intimate setting with a partner at home for another. The hardware needs to be adaptable to the limited mobility of patients, for example, patients who are wheelchair-bound. In addition, the VR hardware should still allow healthcare providers and patients to interact during VR use; the patient needs to be able to hear the voice of the healthcare provider.

The seventh and last recommendation was related to the design and development of the implementation of VR in practice. In multiple studies, it was advised that healthcare organizations use a structured, multi-model implementation intervention to support the needs of stakeholders and address barriers to VR use. The key stakeholders should be engaged during the development process of implementation interventions. It was recommended to use a theoretical framework, such as the Consolidated Framework for Implementation Research (CFIR) [ 46 ] or the Decomposed Theory of Planned Behavior (DTPB) [ 47 ] to guide the development of relevant implementation strategies to enhance the uptake of VR in healthcare practice.

Principal findings

This scoping review was conducted to provide insight into the current state of affairs regarding the implementation process of virtual reality in healthcare and to identify recommendations to improve implementation research and practice in this area. This review has resulted in an overview of current implementation practices. A broad range of study designs was identified: from qualitative studies that described expected factors of implementation, to quantitative methods that summarized observed factors. From the included studies, it can be concluded that the main focus of the implementation of VR is on practical barriers and facilitators, with less attention paid to creating a systematic implementation plan, including concrete implementation objectives, developing suitable implementation strategies to overcome these barriers, and linking these barriers or facilitators to clear implementation outcomes. Only two studies described objectives for implementation and the practical strategies used to reach these objectives. Most implementation strategies that were described related to practical resources and organizational support to create time and room for healthcare providers to learn about VR and use it in treatment.

Despite differences in the type of VR technology, healthcare settings, and study designs, many studies identified the same types of barriers and facilitators. Most identified barriers and facilitators concerned the adopter system and organization categories of the NASSS framework [ 24 ], e.g., the needs of healthcare providers related to VR use and the organizational support during the implementation of VR. The most frequently mentioned barriers were a lack of practical resources, a lack of validated evidence on the added value of VR, and a perceived lack of experience in working with VR. This review showed that facilitators were studied less than barriers; most of the included studies only described implementation barriers. However, in the studies that did mention facilitators, similar themes emerged between identified barriers and facilitators, mostly related to practical resources, organizational support, and providing evidence of the added value of VR. The content of the recommendations for the implementation of VR fits with the foregoing.

Comparison with prior work

Despite the importance of concrete strategies to successfully implement VR [ 20 ] and the conceptualization of implementation outcomes to understand the process and impact of implementation [ 22 ], there is a lack of research on this systematic implementation approach. In this review, only a few studies used a theoretical framework to structure implementation or data analysis. Frameworks that were mentioned most often were the Consolidated Framework for Implementation Research (CFIR) [ 30 ], and the Decomposed Theory of Planned Behavior (DTPB) [ 31 ]. However, none of the studies that mentioned the use of these models described an explicit link between the separate strategies, barriers, or facilitators and the integrated systematic implementation process. This illustrates the gap in research between identifying factors that influence implementation and linking them to practical strategies and implementation outcomes to form a coherent implementation intervention. The development of a coherent implementation intervention was only mentioned in two studies that were included in this review. To illustrate, one study set up an implementation intervention that promotes clinician behavior change to support implementation and improves patient care [ 63 ]. A coherent intervention could be an option to structure the implementation process and bridge the gap between knowledge of the use of VR to actual uptake in practice [ 63 ]. However, from implementation frameworks, such as the NASSS framework [ 24 ] or the CFIR [ 30 ], it is clear that the focus should lie on a coherent multilevel implementation intervention that focuses on all involved stakeholders and end-users, not only on one stakeholder.

The importance of focusing on the behavior change of all involved stakeholders, such as healthcare providers, patients, support staff, and managers, is reflected in the results of this review. Most barriers, facilitators, strategies, and recommendations are related to stakeholders within the healthcare organization who need to change their behavior in order to support implementation. For example, healthcare providers are expected to learn new skills to use VR, and organizational management needs to make time and room available to support healthcare providers in their new learning needs and actual VR use during treatment. This highlights the importance of strategies that target concrete stakeholder behavior for successful implementation. Identifying concrete behavior that is targeted in an implementation intervention can help describe who needs to do what differently, identify modifiable barriers and facilitators, develop specific strategies, and ultimately provide an indicator of what to measure to evaluate an intervention’s effect on behavior change [ 64 ]. The focus on behavior in implementation is not new; it is an important point of attention in the implementation of other eHealth technology [ 14 ]. However, based on the results of this scoping review, this focus is lacking in research on VR implementation.

To design implementation interventions that focus on the behavior change of stakeholders, existing intervention development frameworks can be used. An example is Intervention Mapping (IM), a protocol that guides the design of multi-level health promotion interventions and implementation strategies [ 65 , 66 ]. It uses a participatory development process to create an implementation intervention that fits the implementation needs of all involved stakeholders [ 65 ]. According to Eldredge et al. [ 65 ] and Donaldson et al. [ 67 ], IM can provide guidance on overcoming barriers by applying implementation strategies based on behavioral determinants and suitable behavior change techniques. For example, reflecting on the implementation strategies described in this review, providing feedback as a behavior change method can be used during education or training on VR use to support the learning needs of healthcare providers. In addition, providing opportunities for social support could be seen as the behavior change technique behind the need for support and discussion of VR use during intervision groups with other healthcare providers.

Implications for practice and future research

The results from this review provide various points of departure for future implementation research and implications for practice. An important implication for both is the need for a systematic approach to the implementation process. Most studies identified in this review focused only on barriers or facilitators to implementation, without paying attention to the systematic process of developing an implementation intervention that specifies implementation objectives, describes suitable strategies that fit these barriers and facilitators, and conceptualizes implementation outcomes to evaluate the effectiveness of these strategies. The development of an implementation intervention should preferably be supported by theoretical implementation frameworks such as the Consolidated Framework for Implementation Research [ 30 ] or the NASSS framework [ 24 ]. In this review, all implementation factors could be coded and analyzed within the categories of the NASSS framework, indicating its usefulness in structuring implementation research. Future research could focus on applying and evaluating such implementation frameworks for the implementation of VR in healthcare, specifying factors related to the implementation of VR and focusing on all phases and levels of implementation.

In addition, it could be valuable to use existing intervention development frameworks, such as Intervention Mapping, to guide the design of a complete implementation intervention. Future research could apply these frameworks in an implementation context, reflect on the similarity in working mechanisms, and evaluate their influence on the implementation process and on the behavior change of the involved stakeholders. In this way, a first step can be made toward identifying the added value of systematic implementation intervention development.

Furthermore, as being aware and convinced of the added value of VR within the treatment of patients is seen as an important facilitator of implementation for healthcare providers and organizations, it would be valuable for future research to focus on the evaluation of the efficacy of VR within healthcare practice. However, this raises an interesting paradox. Healthcare organizations and healthcare providers would like to have evidence of the added value of VR before investing in the technology for its implementation, but the efficacy of VR in practice can only be determined in an ecologically valid way when it is already thoroughly implemented in healthcare practice.

Strengths and limitations

This review set out to give an overview of factors that are related to the implementation practice of VR in healthcare. A strength of this study is that it used the NASSS framework to structure the analysis and review process. The use of an implementation framework contributed to systematic data collection and analysis, which can increase the credibility of the findings [ 68 ]. However, the use of the NASSS framework also revealed some drawbacks. Although all implementation factors were categorized within the categories of the NASSS framework, this coding was limited by the description of these categories and the overlap between some of them. For example, most barriers and facilitators that were categorized under organization, adopters, or technology were relevant for sustainable embedding and thus could fit in the category “embedding and adaptation over time” as well. In addition, the description of the category “condition” (the illness of the patient and possible comorbidities, often influenced by biomedical and epidemiological factors [ 24 ]) is too limited to describe all factors related to patient suitability for VR. The condition of a patient within mental healthcare is often related to other aspects, such as sociodemographic factors like age, technical skills, and feeling comfortable using new technology. All these factors could influence patient suitability for VR. Moreover, in most included studies, the barriers or facilitators were not described in great detail, which made the coding process within the NASSS categories more difficult.

Furthermore, records whose titles did not focus on the implementation process of VR, e.g., studies that only focused on usability or effectiveness, were excluded during screening. Since usability studies could still partly address implementation, this may have caused us to miss publications with interesting insights on implementation whose main focus lay elsewhere. We tried to overcome this limitation by selecting detailed inclusion and exclusion criteria for the literature search and abstract screening; a study was excluded only when there was no indication of a link between usability and implementation.

In addition, the full-text screening and data extraction were executed by one researcher, which could have caused us to miss information related to the topic. However, since the researcher used inclusion criteria that were thoroughly discussed during the title and abstract screening, as well as a detailed data extraction form, the chances of missing information are considered low. Furthermore, the first and second authors both extracted data from a few full-text articles, and in case of doubt, full texts were discussed by both authors.

Furthermore, because this scoping review aimed to provide an overview of the current state of affairs related to the implementation of VR in healthcare, all available studies were included, regardless of their quality and type of results. This is in line with the general aim of scoping reviews, which is to present a broad overview of the evidence on a topic. Since a quality assessment was not conducted, not all results of the included studies might be valid or reliable. In addition, most of the barriers, facilitators, and recommendations stated in this review were observed in studies that took place after actual implementation. However, some of these factors were mentioned as potential factors related to implementation in studies that collected data before actual implementation; these factors were described as expected by the involved stakeholders, but not observed. Therefore, such findings should be interpreted with care.

Conclusions

This scoping review has resulted in an initial overview of the current state of affairs regarding the implementation of VR in healthcare. It can be concluded that the included publications show a clear focus on practical barriers and facilitators to the implementation of VR. Only a few studies addressed implementation frameworks, specific strategies, objectives, or outcomes. To take the implementation of VR in healthcare to the next level, it is important to ensure that implementation is not studied in separate studies focusing on one element, e.g., therapist-related barriers, but that it entails the entire process, from identifying barriers to developing and employing a coherent, multi-level implementation intervention with suitable strategies, clear implementation objectives, and predefined outcomes. This implementation process should be supported by implementation frameworks and ideally focus on behavior change of stakeholders such as healthcare providers, patients, and managers. This in turn might result in increased uptake and use of VR technologies that are of added value for healthcare practice.

Acknowledgements

Not applicable.

Appendix 1. Full electronic search strategy

Search terms, search string.

TS = (implement* OR adopt* OR disseminat* OR introduc* OR “uptake”) AND TS = (“virtual reality” OR VR OR “virtual technolog*” OR “virtual environment”) AND TS = (health* OR “care” OR treat*)
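As an illustration (not part of the original search protocol), the three concept blocks of this topic-search string can be assembled programmatically, e.g., when adapting the query to other databases. The term lists below are copied from the string above:

```python
# The three concept blocks reported in Appendix 1.
implementation_terms = ["implement*", "adopt*", "disseminat*", "introduc*", '"uptake"']
vr_terms = ['"virtual reality"', "VR", '"virtual technolog*"', '"virtual environment"']
health_terms = ["health*", '"care"', "treat*"]

def ts_block(terms):
    # Each concept block becomes a TS=(...) clause with OR between synonyms.
    return "TS = (" + " OR ".join(terms) + ")"

# Blocks are combined with AND so a record must match all three concepts.
query = " AND ".join(ts_block(t) for t in [implementation_terms, vr_terms, health_terms])
print(query)
```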

Appendix 2. Study, technology, and implementation characteristics per study

Table 6 Study characteristics, characteristics of VR technology, and implementation characteristics per study

Authors’ contributions

MK, HK, and SK designed the study and wrote the protocol. MK conducted literature searches. MK and HK screened the titles and abstracts. MK analyzed the data and wrote the first draft of the manuscript. HK, SK, and YB contributed to the final manuscript and the authors have read and approved the final manuscript.

Funding

Funding for this study was provided by Stichting Vrienden van Oldenkotte. They had no role in the study design; collection, analysis, or interpretation of the data; writing the manuscript; or decision to submit the paper for publication.

Availability of data and materials

Declarations.

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Virtual reality technology for learning detailed design in landscape architecture

  • Open access
  • Published: 23 April 2024
  • Volume 3, article number 39 (2024)


  • Jaeyoung Ha   ORCID: orcid.org/0000-0002-8096-0567 1 ,
  • Kawthar Alrayyan 1 &
  • M. M. Lekhon Alam 2  

There is much interest in employing computer technology in design professions and education. However, few attempts have been made to apply immersive visualization technology to learning design details in landscape architecture. This study aims to illuminate how virtual reality (VR) technology helps students with design details in landscape architecture. Students were given a course project to create 3D models, such as boardwalk structures located in residential pond areas. Based on their 3D models, we asked 16 research participants to answer survey questionnaires about the perception of realism, scale, and effectiveness of using computer technology in semi-immersive environments (e.g., monitor display-based) as opposed to fully immersive environments (e.g., VR head-mounted display-based). The results of our study showed that students perceived greater realism in fully immersive environments than in semi-immersive environments. In terms of perception of scale, participants perceived the height of the simulated model to be greater than they had anticipated in fully immersive environments. While there were no statistically significant findings regarding the effectiveness of design evaluation in the two modalities, students mentioned that VR technology can effectively assist in creating design details, as it provides them with a better understanding of the spatial characteristics of models.


1 Introduction

As computer graphics and visualization rapidly develop and gain widespread popularity, there is a significant surge in the adoption of computer software for design education [ 1 , 2 ]. Learning two-dimensional (2D) and three-dimensional (3D) software, including AutoCAD, Rhino, and Sketchup, improves students’ ability to visualize and articulate their ideas with greater freedom [ 3 ]. Using computer software increases accuracy, neatness, and ease of modification for design works, thereby achieving time and cost efficiency for students [ 4 ].

Despite the benefits of computer graphics and visualization, students encounter several limitations when using 2D and 3D technology in the design field. One significant challenge lies in the conventional 2D monitor (e.g., LCD, LED display), which delivers a lower realism of visual representation in design models [ 5 , 6 ]. This limitation is problematic because students struggle to bridge the gap between imagination (their design works) and real-world design problems [ 7 ]. The computer drawings are unable to fully depict how their class design outputs are generated and produced on-site due to the distance of the model displayed on a monitor screen [ 8 ]. Thus, students do not understand the real scale and texture of their model when it is being built. As a result, this challenge discourages students from properly evaluating their design project and identifying potential flaws in their model.

The emerging immersive 3D visualization technologies, such as virtual reality (VR), are unlocking new opportunities for design exploration and presentation [ 9 ]. VR employs computer graphics systems in conjunction with a diverse array of interface and display devices to promote the sense of immersion in an interactive 3D computer-generated environment, wherein the virtual exhibits a spatial presence [ 10 ]. Sensory information is transmitted through a head-mounted display (HMD) equipped with a head movement tracking system, which provides seamless real-time visual representation, thereby allowing for a sense of full immersion [ 11 , 12 ] and a feel of presence [ 13 , 14 ]. VR serves as a tool for people to visualize, manipulate, and interact with intricate computer systems and data information [ 15 , 16 ]. It has significant potential for solving today's real-world problems [ 17 ].

The architectural professions use VR for various purposes, including communicating design specifications, finding appropriate design solutions, providing interactive design experiments, and improving understanding and learning of design concepts [ 18 ]. VR technology introduces the crucial aspects of immersion and interactivity to 3D computer-generated models, allowing unprecedented “exploration” that surpasses the limitations of the conventional forms of representation in professional domains [ 19 ]. The adoption of VR applications in various design fields has yielded significant improvements in teaching and training performance, enabling engineers and designers to apply theoretical knowledge to the practice of industry through real-time experiences [ 20 , 21 , 22 ].

In particular, VR becomes an integral design educational tool as it facilitates the transition from a teacher-centered methodology to a student-centered approach [ 23 ]. This is partially attributed to VR's capability to enable students to reflect on the functional and formal characteristics of architectural spaces [ 24 ]. Building upon this foundation, there is a small but growing body of studies investigating how VR plays a significant role in improving the spatial understanding of 3D spaces for untrained designers by evaluating students’ spatial perception [ 25 ]. Paes et al. [ 26 ] found that students generally have a better spatial perception of their design model in fully immersive environments. To verify this, spatial perception questionnaires were utilized to assess vertical distances, area of space, spatial positioning of elements, and shape using a 5-point Likert scale. Similarly, Hou et al. [ 27 ] also mentioned that the use of VR in design education can enhance students’ perception of scale, as it has an equivalent effect to training in real space. The accuracy of dimension judgment (e.g., distances from a virtual point to objects) was assessed to identify perception of scale. In Ceylan [ 28 ]’s study, students gained more benefits in perceiving the physical characteristics of a model (e.g., dimensions of the model) when using VR. The prediction of dimension, including length, width, height, and total used area, was evaluated using the percentile proximity of the actual size of the model. Additionally, open-ended questions were examined for their impressions of the models and their preference for using VR in design models.

Despite the considerable body of studies on immersive visualization technology over the last decades, few studies have examined how this emerging technology can improve the design process and facilitate the comprehension of design details in landscape architecture. Furthermore, to the best of our knowledge, no studies have explored the effectiveness of VR in learning design details based on students’ project outputs. Immersive VR environments equipped with high-resolution HMDs enable users to experience high realism, thereby providing a real-scale perception of spaces and high precision in material representation [ 8 , 29 ]. VR allows users to experience spaces from their own perspective by generating 3D spatial information on a full scale and presenting the illusion of depth and immersion [ 30 , 31 ]. Thus, enhanced awareness of the 3D elements and factors in design details through VR promotes spatial cognition, aiding users in making more informed design decisions [ 32 , 33 , 34 ] and identifying potential design defects [ 35 ].

This study proposes to integrate VR technology into landscape architecture design detail courses to enhance students' learning outcomes effectively. In the class final project, students are required to construct different types of wood structures in residential community pond areas and simulate their design using two different methods: (1) conventional way: 3D simulation in the semi-immersive environment (monitor display-based simulation), and (2) the proposed way: the fully immersive environment (VR HMD-based simulation). Our research team examines students’ work to determine how VR can enhance their understanding of learning landscape design detail by evaluating spatial realism, spatial scale, and the effectiveness of using emerging technology. To achieve our goal, this study examines three main research questions: (1) To what extent does students’ perception of realism in the design model differ between semi-immersive and fully immersive environments? (2) To what extent does students’ perception of scale in the design model differ between two different modalities? (3) How effective do students feel when implementing detailed designs in two different modalities?

2.1 Experiment design

2.1.1 Overview of the research procedure

The present study utilized students’ final project of LAR 3164: Design in Detail, a prerequisite course for construction documentation studio in the landscape architecture program at Virginia Tech in the United States. This course targets third-year undergraduates and encompasses theoretical and practical aspects of building construction education and its development for landscape architects. LAR 3164 aimed to teach students about landscape construction details and their impact on design, enhance their skills in traditional and digital construction methods, and explore the forces, properties, limitations, and mitigation strategies related to materials selection. We selected this course for our research experimental settings because students learn a wide range of practical detail designs for landscape construction.

This study had only 16 participants. A small sample size is common due to the nature of design education, wherein educators individually trained and guided each student through their semester-long project. However, the sample size is sufficient for the purpose of this project and in accordance with relevant literature [ 23 , 36 , 37 ]. Based on the outputs of each student’s final project work, we asked several questions to examine our research goals and objectives. In our survey consent form, we included the statement, “Research participation may not affect grades, recommendation letters, or other opportunities or decisions made by teacher-investigators” to ensure a non-oppressive environment for students. This study was approved by the Virginia Tech Institutional Review Board (IRB).

2.1.2 Student’s work procedures (Assignment instructions)

The aim of the final project in LAR 3164 is to enhance students’ understanding and skills in designing wooden structure models for the pond area in the apartment community in Blacksburg, Virginia. The site is frequented by students, faculty members, and seniors of the community for outdoor recreational activities, including walking and jogging. The area of the pond is approximately 70,143 square feet. Figure  1 shows a satellite and a digital base map of the pond area as given to the students. The students were tasked with designing and strategically placing three wood structural elements: a wooden boardwalk, a resting deck, and a wood overhead structure (e.g., pergola), taking into consideration the natural surroundings and the pond’s proximity.

figure 1

Site base map and satellite image

The project consisted of three main parts: site planning and design (master plan), the execution of detailed design drawings, and 3D modeling and rendering, followed by the research experiment phase (see Fig.  2 ). In Phase 1, students analyzed the site, developed designs for the structural elements, and decided on the materials and structural configurations. In Phase 2, students translated their master plan design into 2D detail design drawings using AutoCAD. Footnote 1 The instructors provided feedback on their design structures and specifications. Phase 3 focused on 3D modeling using SketchUp Footnote 2 and Rhino Footnote 3 based on their 2D detailed drawings. Finally, the outcome of the 3D model design was rendered via Twinmotion EDU 2022.2.3. Footnote 4 The students’ work served as a preparation for the research experiment.

figure 2

A student’s work procedure for the final project

2.1.3 Survey methodology

For the research experiment, students imported their final 3D wood structure models into Twinmotion software to be used for the simulation experiment (see Fig.  3 ). The main idea of this experiment is to test the perception of realism, scale, and effectiveness of the 3D boardwalk simulation by comparing semi-immersive and fully immersive environments. We asked about the extent of realism of the computer simulation in size, volume, depth, view, and texture using a 5-point Likert scale (1 = Nothing; 5 = Much) derived from the survey questionnaires of Gómez-Tone et al. [ 8 ]. The questionnaires ask "how much realism gives space the perception of its real size, volume, depth, view, and texture, respectively?". A higher value for each metric indicates that participants perceive a greater sense of realism when observing their visualized models, while a lower value implies respondents perceive less realism on that metric. These questionnaires identify whether the two modalities create differences in the perception of realism among student participants.

figure 3

Final work samples of students

In terms of scale, we asked questions about the spatial scale of their simulated design models. Since students individually produced their own models with different dimensions, we could not ask for the absolute size of the design models to assess the participants’ sense of scale (e.g., how many feet wide do you think your simulated model is), as in previous studies. Furthermore, they were already well informed about the dimensions of the detailed drawings while working on the 3D modeling process. Thus, we measured the perceived size of an object relative to each participant’s expectations by asking to what degree the scale of their boardwalk model matched what they expected: "On a scale of 1 to 5, 1 indicates that the size of the simulated model (boardwalk) is much smaller than your expectations, and 5 indicates that it is much larger than your expectations in height, width, and length" (1 = Very small; 5 = Very large). A higher value for each metric demonstrates that the simulated model in that modality is larger than the participant expected, while a lower value implies that it is smaller.

The last part of the survey asked about the extent of effectiveness of the computer simulation in finding defects of scale and structure in the two modalities using a 5-point Likert scale (1 = Strongly disagree; 5 = Strongly agree). The survey questionnaires include "Do you think this simulation helps you to find any inappropriate scale, material (texture) defects, overall errors, and defects of structure in your design model?". A higher value for each metric indicates that the modality is effective in identifying errors in the models, while a lower value indicates that it is not. We also asked how computer simulation can help design solutions and decision-making processes using the same Likert scale. Only after the fully immersive environment experience did we ask open-ended questions about how VR can help in the process of design details: "How will VR assist you in making better decisions regarding the outcome of your design?" and "Provide two advantages of using a fully immersive environment (VR HMD-based) when simulating your model as compared to a semi-immersive environment (monitor display-based)."

2.1.4 Experimental procedure

The experiment was conducted in two consecutive parts: (1) a semi-immersive environment experience followed by a survey designed by the researchers and (2) testing the 3D model in a fully immersive VR environment experience followed by a survey including open-ended questions, as shown in Fig.  4 . Each student was allotted 30 min to participate in the entire research procedure. The semi-immersive environment experience entailed testing the project within a 2D monitor display-based semi-immersive environment setup. The students navigated their 3D models using mouse and keyboard controls via a wide 34-inch LG monitor display (2560 × 1080 resolution) for approximately 5 min. The students then completed a survey assessing their perception of realism, scale, and effectiveness of semi-immersive visualization technology.

figure 4

The flow of the research experiment

In the second part, the same 3D models were tested in a fully immersive VR environment using the HTC VIVE PRO VR system and HMD for 5 min. Twinmotion provides “VR mode” to enable users to navigate 3D models in fully immersive environments. The participants completed a survey specific to this aspect of the experiment. Additionally, two open-ended questions were included to explore how VR could enhance decision-making and the advantages of using VR for this project.

2.1.5 Participants’ demographics

A total of 16 students participated in this research at the end of the semester. Out of the 16 students, 9 were female, and 7 were male. The age groups ranged as follows: under 20 (6 students), 20–25 (8 students), 25–30 (1 student), and 30–35 (1 student). The vast majority of participants were white (10 students), followed by Asian (4 students), Latino (1 student), and one student did not respond. Among them, 12 students were undergraduates, and 4 were at the graduate level. Additionally, 10 out of the 16 students reported having VR experience before this experiment.

2.2 Data analysis

2.2.1 Statistical analysis

Our study design performed repeated measures of each participant’s response in semi-immersive and fully immersive environments. Since our data were not normally distributed, a non-parametric test is more suitable for examining our research questions. Therefore, the Wilcoxon signed-rank test was applied to examine the differences in participants’ perceptions between semi-immersive and fully immersive environments. The rank-order test method is more robust when the sample sizes are too small to meet the underlying assumption of continuous data (e.g., probability distributions) [ 38 ]. The minimum sample size threshold for non-parametric asymptotic tests is 16 [ 39 , 40 ]. Statistical analysis was performed using IBM SPSS Statistics 28. Under each of the three main research questions, we tested sub-questions for statistically significant differences between the two environments. The Wilcoxon signed-rank test measures each sample twice to form pairs of observations. The effect size (r) was computed as the absolute Z statistic divided by the square root of the sample size. Cohen [ 41 ] noted that effect sizes are often classified as small (≈ 0.2), medium (≈ 0.5), or large (greater than or equal to 0.8).
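As a concrete sketch of this analysis, the paired comparison can be reproduced in Python with SciPy rather than SPSS; the Likert responses below are hypothetical stand-ins (not the study's data), and `method="approx"` is used because it exposes the Z statistic on which the effect size is based.

```python
import math
from scipy.stats import wilcoxon

# Hypothetical paired 5-point Likert realism ratings from 16 participants
# (illustrative only; not the study's data).
semi  = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3, 3, 2, 3, 4]
fully = [4, 4, 5, 5, 3, 4, 5, 4, 5, 4, 3, 4, 5, 3, 4, 5]

# Wilcoxon signed-rank test on the paired differences; the normal
# approximation makes the Z statistic available as res.zstatistic.
res = wilcoxon(semi, fully, method="approx")

# Effect size r = |Z| / sqrt(n), as computed in the paper.
r = abs(res.zstatistic) / math.sqrt(len(semi))
print(f"Z = {res.zstatistic:.3f}, p = {res.pvalue:.4f}, r = {r:.3f}")
```

With every paired difference favoring the fully immersive condition, the test reports a significant difference and a large effect, mirroring the shape of the results below.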

3.1 Realism

The results of the Wilcoxon signed-rank test revealed a statistically significant difference between the scores of realism in semi-immersive and fully immersive environments. The fully immersive environment had a higher realism score in size than the semi-immersive environment, Z = − 2.714, p = 0.007, with a medium effect size (r = 0.679). In terms of volume, the fully immersive environment had a higher realism score than the semi-immersive environment, Z = − 3.358, p < 0.001, with a large effect size (r = 0.840). In addition, the fully immersive environment had a higher realism score in depth than the semi-immersive environment, Z = − 3.217, p = 0.001, with a large effect size (r = 0.804) (see Fig.  5 and Table  1 ).
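The reported effect sizes can be checked against r = |Z|/√n with n = 16; a minimal verification of the arithmetic, using only the Z values quoted above:

```python
import math

n = 16  # number of paired observations in the study
# (Z statistic, reported effect size r) for size, volume, and depth.
reported = [(-2.714, 0.679), (-3.358, 0.840), (-3.217, 0.804)]

# r = |Z| / sqrt(n); each recomputed value agrees with the paper to 3 d.p.
computed = [abs(z) / math.sqrt(n) for z, _ in reported]
for (z, r_paper), r_calc in zip(reported, computed):
    print(f"Z = {z:+.3f}: reported r = {r_paper:.3f}, computed r = {r_calc:.3f}")
```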

figure 5

Results of the realism survey

3.2 Scale

The results of the Wilcoxon signed-rank test revealed a statistically significant difference between the scale scores in semi-immersive and fully immersive environments. The score for height was significantly greater in the fully immersive environment than in the semi-immersive environment, Z = − 2.070, p = 0.038, with a medium effect size (r = 0.518). The mean height value in the fully immersive environment was 3.38, indicating that participants perceived the height of the simulated model to be greater than they had anticipated. However, the two systems showed no statistically significant difference in the perception of width and length. The score for width in the fully immersive environment was less than in the semi-immersive environment, while the score for length was greater (see Fig.  6 and Table  2 ).

figure 6

Results of the scale survey

3.3 Effectiveness

The Wilcoxon signed-rank test results showed no statistically significant difference in identifying inappropriate scale, material defects, and structural defects between the semi-immersive and fully immersive environments. In addition, no statistically significant differences were found in how the two environments assisted creativity in details and design decision-making. Although none of the differences was statistically significant, most participants still found fully immersive environments effective in identifying design flaws and aiding in decision-making (see Fig.  7 and Table  3 ).

figure 7

Results of the effectiveness survey

4 Discussion

This research attempts to uncover how immersive visualization technology can be effective in structural detail design in landscape architecture. To the best of our knowledge, no studies have examined how immersive visualization technology can promote landscape design details based on students’ design works. As expected, the results of the study showed that fully immersive environments gave students higher realism in size, volume, and depth while navigating their 3D models compared to semi-immersive environments. This is in line with previous findings that VR can give users a more realistic experience [ 42 , 43 ]. Participants felt a particularly large gap in realism between the two environments in the volume and depth of their models. The fully immersive environment is more likely to offer participants higher realism by incorporating texture, shading, shadow, and lighting [ 44 ]. This outcome illustrates that VR can be useful because it promotes the comprehension of specific design features (e.g., layout, scale, and dimension) during the design review process [ 45 ].

Regarding the perception of scale, the mean values of height, width, and length were close to 3, indicating that the dimensions of simulated 3D models exhibited the intended scale in semi-immersive and fully immersive environments. When comparing the two systems, the Wilcoxon signed-rank test results revealed no statistically significant difference in the perceived width and length of their wood structure models. This is because students were familiar with the length and width of their model. When they conducted the detailed design drawing process, they referred to tables from textbooks that explained the maximum spans and distances of each beam, span, and post. Furthermore, the width and length of the wood structure model are also easily assessed with 2D drawings without 3D perspective drawings.

However, students perceived the height of the simulated model to be much greater than they expected in fully immersive environments. One explanation is that the model’s height is not easily assessed in the semi-immersive environment because it cannot properly project topography with the surrounding environment at eye level. In fully immersive environments, the height of the wood structure felt greater because VR gives more realism in the depth of the pond area and provides the real, or natural, scale [ 8 ]. In open-ended questions, students mentioned that scale was easier to recognize in the VR simulation because they could observe objects from an eye-level perspective (i.e., their vision height in the real world) (refer to Table  4 for comments provided by student H). Thus, their perception of height in semi-immersive environments might be inaccurate, whereas fully immersive environments might be close enough to the real scale with high accuracy. Taking this into consideration, students tended to design wood structures lower than intended when building 3D models.

Surprisingly, the findings of this study revealed no significant difference in participants’ perception of the effectiveness of evaluating their design work in fully immersive and semi-immersive environments. This is partly because students had already gone through their design while building the 3D model in SketchUp. During the design stage, they had already acknowledged the possible defects in their model, including size, texture, and structure. Though the results showed no statistical difference, students tended to rate the fully immersive environment as more effective when exploring their design model.

The answers derived from open-ended survey questions support the effectiveness of using VR in detail design for students. One main benefit highlighted by participants was the ability of VR to enhance their understanding of real scale and spatial relationships. By immersing themselves in a virtual environment, they could have a cognitive experience from a first-person perspective [ 34 ], gaining a better sense of the size and proportions of objects and spaces related to their structural design. Immersive visualization technology allowed them to make more informed decisions when designing detailed elements of their projects.

Second, the participants emphasized the value of VR in assessing material properties and texture through close examination as it gives a more realistic sensation [ 46 ]. This enabled them to make more accurate judgments about the suitability of materials and ensured their design aligned with their intended aesthetics and functional goals. Third, the participants highlighted the role of VR in identifying design issues and facilitating the iterative design process. They could easily spot misalignments, connections, and other potential flaws. These early detections of issues empowered them to make necessary adjustments before final submission. The study participants acknowledged the convenience of VR in bridging the gap between the 2D drawing works and the simulation of the 3D model [ 47 ]. This transition facilitated better decision-making about scale, materials, and overall site context.

After our experiment, we also asked students to revise their 3D drawings and renderings based on defects found in their VR experiences as an activity for students to reflect on their constructed models (see Fig.  8 ). One student observed several issues found in the original wood structure models while experiencing VR simulation, such as the structure’s scale, seating arrangements, and the leveling of the boardwalk with existing topography. Several drawing revisions were made to easily transition between the boardwalk and the existing path, enlarging the pergola, rearranging the seating, and utilizing plantings to break up the space. Throughout the VR simulation, students acknowledged that VR can effectively assist in identifying structural defects in their model and implementing modifications of drawings. This finding supplements a broader body of work that explores the efficacy of using VR in landscape design detail.

figure 8

Revision of a student’s work

4.1 Limitations and implications

Though this study highlights several implications for design educators, there are some limitations in our study. First, due to the class size and limited resources, only 16 students participated in our study. The small sample size can limit statistical power and lower the external validity of this study. Although we addressed this issue using a non-parametric test, we strongly recommend that future studies consider using a larger sample size to increase statistical power.

Second, although we had open-ended questions in our survey, we mainly relied on questionnaires utilizing the 5-point Likert scale to inquire about students’ perceptions of their 3D models. To identify more details, the semi-structured interview is highly encouraged for each participant.

Third, there is a technological limitation in the current VR HMD. Even though the VIVE Pro 2 has a high resolution (2448 × 2448 pixels per eye), some participants felt cybersickness (e.g., dizziness, nausea, and headaches) caused by sensory mismatch (i.e., a sense of self-movement in VR environments while physically stationary in the real-world environment) [ 48 , 49 , 50 ]. This technological limitation may lead to variations in participants’ responses to the survey [ 48 ]. To address this, future studies should employ a simulator sickness questionnaire (SSQ) to account for potential confounding effects [ 51 , 52 ].

Fourth, we were unable to evaluate how accurately participants could provide precise measurements of their respective wood structure models. As each participant designed their own 3D model, the wood structure models exhibited considerable variation in width, length, and height. Thus, we had no choice but to assess the perceived scale from participants rather than actual measurements in this study. Future research might assign students wood structure drawings with identical dimensions, as in previous studies [ 26 , 27 , 28 ].

Lastly, this study did not counterbalance the order of research procedures in our experiment. Participants experienced the 3D simulation in the semi-immersive environment first, and afterward they experienced the same 3D simulation in the fully immersive environment. Though several experimental studies found no main effect of experiment sequence [ 53 , 54 , 55 ], this sequential judgment might be susceptible to distortion, as the two forms of context effects can influence each other [ 56 ]. To mitigate this carryover effect, future studies should incorporate two different orders of research experiments.

The advantages of immersive VR for detail designs surpass those of the semi-immersive computer monitor experience. The ability to fully immerse oneself, understand scales more accurately, physically interact with space, and navigate easily contributes to a more realistic and comprehensive simulation experience. As VR technology continues to evolve, its integration into landscape construction design workflows holds great promise for improving decision-making, enhancing user experiences, and pushing the boundaries of design innovation. This study bridges industrial needs with educational demonstrations and provides some necessary suggestions for more practical design education.

5 Conclusion

Computer technology plays a significant role in assisting students’ design work. However, conventional 3D graphics software is limited in the realism it can produce, making it harder to detect defects in design outcomes. This study aimed to explore how VR technology can assist students with design details in landscape architecture. To achieve this goal, we asked students to design wood structure details, such as boardwalks, in a residential pond area. Based on their design works, 16 students were asked to answer survey questionnaires about the perception of realism, scale, and effectiveness of computer technology by comparing semi-immersive (e.g., monitor display-based) and fully immersive (e.g., VR HMD-based) environments. Our findings showed that participants were more likely to feel a higher level of realism in size, volume, and depth in fully immersive environments than in semi-immersive environments. Furthermore, participants responded with varying perceptions of scale, particularly with regard to the height of wood structure details, between the two environments. Though there was no statistical significance, participants answered that a fully immersive environment could facilitate the creation of design details in landscape architecture. VR can help participants better understand the size and proportions inherent in their design objects. These findings show that emerging technology empowers students to elevate their creative design output by achieving greater precision and fidelity in their work.

Data availability

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the protection of privacy.

Code availability

Not applicable.

Footnote 1: AutoCAD 2023 is used with an educational account at Virginia Tech.

Footnote 2: SketchUp Studio is purchased as the student version (Link: https://www.sketchup.com/plans-and-pricing#for-higher-education ).

Footnote 3: Rhino 8 is used with an educational account at Virginia Tech.

Footnote 4: Twinmotion is available in the education edition (Link: https://www.twinmotion.com/en-US/license ).

Guney D. The importance of computer-aided courses in architectural education. Procedia Soc Behav Sci. 2015;176:757–65.


Soliman S, Taha D, El Sayad Z. Architectural education in the digital age: computer applications: Between academia and practice. Alex Eng J. 2019;58(2):809–18.

Khiati S. CAD and 3D Visualization software in design education: is one package enough. J Eng Appl Sci. 2011;3(2):91–100.


Fakhry M, Kamel I, Abdelaal A. CAD using preference compared to hand drafting in architectural working drawings coursework. Ain Shams Eng J. 2021;12(3):3331–8.

Shiratuddin MF, Thabet W, Bowman D. Evaluating the effectiveness of virtual environment displays for reviewing construction 3D models. CONVR. 2004;2004:87–98.

Lindquist M, Maxim B, Proctor J, Dolins F. The effect of audio fidelity and virtual reality on the perception of virtual greenspace. Landsc Urban Plan. 2020;202: 103884.

Kamath RS, Dongale TD, Kamat RK. Development of virtual reality tool for creative learning in architectural education. Int J Qual Assur Eng Technol Educ IJQAETE. 2012;2(4):16–24.

Gómez-Tone HC, Bustamante Escapa J, Bustamante Escapa P, Martin-Gutierrez J. The drawing and perception of architectural spaces through immersive virtual reality. Sustainability. 2021;13(11):6223.

Camba JD, Soler JL, Contero M, editors. Immersive visualization technologies to facilitate multidisciplinary design education. In: Learning and collaboration technologies novel learning ecosystems: 4th International Conference, LCT 2017, Held as part of HCI international 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I 4. Springer; 2017.

Bryson S. Approaches to the successful design and implementation of VR applications. In: Earnshaw R, Vince J, Jones H, editors. Virtual reality applications. San Diego: Academic Press; 1995. p. 3–15.

Garrett B, Taverner T, Gromala D, Tao G, Cordingley E, Sun C. Virtual reality clinical research: promises and challenges. JMIR Serious Games. 2018;6(4): e10839.

Maples-Keller JL, Bunnell BE, Kim S-J, Rothbaum BO. The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders. Harv Rev Psychiatry. 2017;25(3):103.

Schubert T, Friedmann F, Regenbrecht H. The experience of presence: Factor analytic insights. Presence Teleoperators Vir Environ. 2001;10(3):266–81.

van Brakel V, Barreda-Ángeles M, Hartmann T. Feelings of presence and perceived social support in social virtual reality platforms. Comput Hum Behav. 2023;139: 107523.

Isdale J. What is virtual reality. Virtual Reality Information Resources. 1998;4. http://www.isx.com/jisdale/WhatIsVr.html .

Kizil M. Virtual reality applications in the Australian minerals industry. In: Application of computers and operations research in the minerals industries, South African. 2003. p. 569–74.

Ferreira A, Mavroidis C. Virtual reality and haptics for nanorobotics. IEEE Robot Autom Mag. 2006;13(3):78–92.

Kamińska D, Sapiński T, Wiak S, Tikk T, Haamer RE, Avots E, et al. Virtual reality and its applications in education: Survey. Information. 2019;10(10):318.

Portman ME, Natapov A, Fisher-Gewirtzman D. To go where no man has gone before: Virtual reality in architecture, landscape architecture and environmental planning. Comput Environ Urban Syst. 2015;54:376–84.

Abulrub A-HG, Attridge AN, Williams MA, editors. Virtual reality in engineering education: the future of creative learning. In: 2011 IEEE global engineering education conference (EDUCON). IEEE; 2011.

Pantelidis VS. Reasons to use virtual reality in education and training courses and a model to determine when to use virtual reality. Themes Sci Technol Educ. 2010;2(1–2):59–70.

Teklemariam HG, Kakati V, Das AK, editors. Application of VR technology in design education. In: DS 78: proceedings of the 16th international conference on engineering and product design education (E&PDE14), design education and human technology relations, University of Twente, The Netherlands; 2014.

Bashabsheh AK, Alzoubi HH, Ali MZ. The application of virtual reality technology in architectural pedagogy for building constructions. Alex Eng J. 2019;58(2):713–23.

Valls F, Redondo E, Sánchez A, Fonseca D, Villagrasa S, Navarro I. Simulated environments in architecture education: improving the student motivation. In: Recent advances in information systems and technologies, vol. 3. 5th ed. Springer; 2017.

Alatta RA, Freewan A. Investigating the effect of employing immersive virtual environment on enhancing spatial perception within design process. ArchNet-IJAR Int J Architect Res. 2017;11(2):219.

Paes D, Arantes E, Irizarry J. Immersive environment for improving the understanding of architectural 3D models: comparing user spatial perception between immersive and traditional virtual reality systems. Autom Constr. 2017;84:292–303.

Hou N, Nishina D, Sugita S, Jiang R, Kindaichi S, Oishi H, et al. Virtual reality space in architectural design education: learning effect of scale feeling. Build Environ. 2024;248: 111060.

Ceylan S, editor. Using virtual reality to improve visual recognition skills of first year architecture students: a comparative study. CSEDU, 2; 2020.

Herman L, Juřík V, Snopková D, Chmelík J, Ugwitz P, Stachoň Z, et al. A comparison of monoscopic and stereoscopic 3D visualizations: Effect on spatial planning in digital twins. Remote Sens. 2021;13(15):2976.

Azarby S, Rice A. Understanding the effects of virtual reality system usage on spatial perception: the potential impacts of immersive virtual reality on spatial design decisions. Sustainability. 2022;14(16):10326.

Kalisperis L, Muramoto K, Balakrishnan B, Nikolic D, Zikic N. Evaluating relative impact of virtual reality system variables on architectural design comprehension and presence. In: Proceedings of eCAADe2006, Volos, Greece; 2006.

George BH, Sleipness OR, Quebbeman A. Using virtual reality as a design input: impacts on collaboration in a university design studio setting. J Dig Landsc Architect. 2017;2:252–9.

Chamberlain BC. Crash course or course crash: Gaming, VR and a pedagogical approach. J Digit Landsc Arch. 2015;354.

Azarby S, Rice A. User performance in virtual reality environments: the capability of immersive virtual reality systems in enhancing user spatial awareness and producing consistent design results. Sustainability. 2022;14(21):14129.

Gómez-Tone HC, Martin-Gutierrez J, Bustamante-Escapa J, Bustamante-Escapa P. Spatial skills and perceptions of space: representing 2D drawings as 3D drawings inside immersive virtual reality. Appl Sci. 2021;11(4):1475.

Messner J, Yerrapathruni S, Baratta A, Whisker V, editors. Using virtual reality to improve construction engineering education. In: 2003 annual conference; 2003.

Azhar S, Kim J, Salman A, editors. Implementing virtual reality and mixed reality technologies in construction education: students’ perceptions and lessons learned. In: ICERI2018 proceedings; 2018: IATED.

Riffenburgh RH. Chapter 16—tests on ranked data. In: Riffenburgh RH, editor. Statistics in Medicine. 2nd ed. Burlington: Academic Press; 2006. p. 281–303.

Siegel S, Castellan NJ. Nonparametric statistics for the behavioral sciences. 2nd ed. New York: McGraw-Hill; 1988.

Snedecor GW, Cochran WG. Statistical methods. 7th ed. Ames: Iowa State University Press; 1980.

Cohen J. Statistical power analysis for the behavioral sciences. Academic press; 2013.

Newman M, Gatersleben B, Wyles KJ, Ratcliffe E. The use of virtual reality in environment experiences and the importance of realism. J Environ Psychol. 2022;79: 101733.

Hvass J, Larsen O, Vendelbo K, Nilsson N, Nordahl R, Serafin S, editors. Visual realism and presence in a virtual reality game. In: 2017 3DTV conference: the true vision-capture, transmission and display of 3D video (3DTV-CON). IEEE; 2017.

Nikolic D. Evaluating relative impact of virtual reality components detail and realism on spatial comprehension and presence. Pennsylvania State University; 2007.

Castronovo F, Nikolic D, Liu Y, Messner J, editors. An evaluation of immersive virtual reality systems for design reviews. In: Proceedings of the 13th international conference on construction applications of virtual reality (CONVR 2013), London, UK; 2013.

Pardo PJ, Suero MI, Pérez ÁL. Correlation between perception of color, shadows, and surface textures and the realism of a scene in virtual reality. JOSA A. 2018;35(4):B130–5.

Koller S, Ebert LC, Martinez RM, Sieberth T. Using virtual reality for forensic examinations of injuries. Forensic Sci Int. 2019;295:30–5.

Caserman P, Garcia-Agundez A, Gámez Zerban A, Göbel S. Cybersickness in current-generation virtual reality head-mounted displays: systematic review and outlook. Virtual Reality. 2021;25(4):1153–70.

Davis S, Nesbitt K, Nalivaiko E, editors. A systematic review of cybersickness. In: Proceedings of the 2014 conference on interactive entertainment; 2014.

Simón-Vicente L, Rodríguez-Cano S, Delgado-Benito V, Ausín-Villaverde V, Cubo Delgado E. Cybersickness. A systematic literature review of adverse effects related to virtual reality. Neurologia. 2022. https://doi.org/10.1016/j.nrl.2022.04.009 .

Bouchard S, Berthiaume M, Robillard G, Forget H, Daudelin-Peltier C, Renaud P, et al. Arguing in favor of revising the simulator sickness questionnaire factor structure when assessing side effects induced by immersions in virtual reality. Front Psych. 2021;12: 739742.

Tadeja SK, Lu Y, Rydlewicz M, Rydlewicz W, Bubas T, Kristensson PO. Exploring gestural input for engineering surveys of real-life structures in virtual reality using photogrammetric 3D models. Multimedia Tools Appl. 2021;80:1–20.

Tawil N, Sztuka IM, Pohlmann K, Sudimac S, Kühn S. The living space: psychological well-being and mental health in response to interiors presented in virtual reality. Int J Environ Res Public Health. 2021;18(23):12510.

Rashidian N, Giglio MC, Van Herzeele I, Smeets P, Morise Z, Alseidi A, et al. Effectiveness of an immersive virtual reality environment on curricular training for complex cognitive skills in liver surgery: a multicentric crossover randomized trial. HPB. 2022;24(12):2086–95.

Carrougher GJ, Hoffman HG, Nakamura D, Lezotte D, Soltani M, Leahy L, et al. The effect of virtual reality on pain and range of motion in adults with burn injuries. J Burn Care Res. 2009;30(5):785–91.

Ferris SJ, Kempton RA, Deary IJ, Austin EJ, Shorter MV. Carryover bias in visual assessment. Perception. 2001;30(11):1363–73.

Acknowledgements

The authors thank the Center for Excellence in Teaching and Learning at Virginia Tech for funding this research. This study was supported by the “Scholarship of Teaching and Learning Grants.” We also express our gratitude to the students who participated in this research.

Author information

Authors and affiliations

Landscape Architecture Program, Virginia Tech, 121 Burruss Hall, 800 Drillfield Drive, Blacksburg, VA, 24061, USA

Jaeyoung Ha & Kawthar Alrayyan

Department of Technology Systems, East Carolina University, Greenville, NC, 27858, USA

M. M. Lekhon Alam

Contributions

Conceptualization, Jaeyoung Ha, Kawthar Alrayyan, M. M. Lekhon Alam; methodology, Jaeyoung Ha, Kawthar Alrayyan; software, Jaeyoung Ha; formal analysis, Jaeyoung Ha; data curation, Jaeyoung Ha, Kawthar Alrayyan; writing—original draft preparation, Jaeyoung Ha, Kawthar Alrayyan; writing—review & editing, Jaeyoung Ha, M. M. Lekhon Alam.

Corresponding author

Correspondence to Jaeyoung Ha .

Ethics declarations

Consent to participate

Informed consent was obtained from all individual participants included in the study.

Competing interests

The authors have no financial disclosures to declare and no conflicts of interest to report.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Ha, J., Alrayyan, K. & Alam, M.M.L. Virtual reality technology for learning detailed design in landscape architecture. Discov Educ 3 , 39 (2024). https://doi.org/10.1007/s44217-024-00123-9

Received : 27 November 2023

Accepted : 15 April 2024

Published : 23 April 2024

DOI : https://doi.org/10.1007/s44217-024-00123-9


  • Design education
  • Virtual reality (VR)
  • Architectural construction
  • 3D modeling
  • Scale perception

Virtual Reality’s Main Benefits Essay

Attention getter.

Virtual reality is gradually becoming one of the mainstream trends in the contemporary world. Many technology manufacturers are now focused on producing their own VR devices. Commonly, such devices are viewed as tools for entertainment. However, as pointed out in an article published in Time Magazine in 2013, virtual reality has a lot of benefits to offer in fields such as healthcare, the military, education, gaming, and simulation, to name a few (Terhakopian par. 1-15).

Introduction to the Topic

The rapid development and the growing popularity of virtual reality raise a logical interest concerning the advantages and disadvantages that are related to the application of this new technology in various spheres of knowledge and activities.

Relevance Statement

Did you know that just like any other modern technologies (smartphones, computers, mp3 players, for example), the technology of virtual reality is likely to enter our daily life in the near future? That is why it is important to explore what benefits and concerns are associated with its use today.

Credibility Statement

The information for this speech was taken from the most credible sources such as books, Time Magazine , and The Guardian – a world-renowned newspaper and portal.

Thesis Statement

Virtual reality is a fast-developing technology that carries a multitude of benefits for such professional fields as healthcare, education, the military, versatile training, psychology, psychiatry, and entertainment; however, the technology is still at an early stage of development and has a set of weaknesses that prevent it from being widely applied.

First of all, in this speech, I will focus on the introduction of the technology of virtual reality and its description. Further, I will move on to the exploration of its benefits (current as well as potential) and the spheres where the advantages of virtual reality technology can be applied. Finally, I will move to the weaknesses of the technology, its drawbacks, and the areas that still need development.

Introduction

Sherman and Craig, the authors of the book titled Understanding Virtual Reality: Interface, Application, and Design that was published in 2003, pointed out that the concept of virtual reality is tightly connected to the notion of a virtual world (7).

Did you know that the virtual world is something we encounter every single day? You encounter it when you watch a film, see a play in a theater, or read a book. The imaginary stories happening on the stage, on the screen, or in one’s imagination are defined as virtual worlds. When it comes to virtual reality, Sherman and Craig emphasized that the major element distinguishing it from the virtual world is immersion (7). In other words, in virtual reality, the viewer is placed within the fictional scenario and can interact with it on a physical as well as a mental level.

Benefits of Virtual Reality

Did you know that it is the physical and mental types of immersion that serve as the primary sources of all the benefits that the technology of virtual reality carries for versatile spheres of knowledge and practice? Entertainment is, probably, the first area that comes to mind when virtual reality is mentioned. In the article published in The Guardian in 2016, it is mentioned that many of the modern leading technology manufacturers are working on the creation of the VR devices for the purposes of gaming (Davis par. 1-3).

Besides, the benefits of VR are appreciated in a variety of other fields. As reported at the Edutainment Conference of 2011, held in Taiwan, one area of practice that benefits from VR is psychology, where it is used to work with patients suffering from post-traumatic stress disorder (Chang et al. 3). In an article published in 2005, Rizzo and Kim noted that the technology makes it possible to re-create potentially traumatic scenarios for patients and to address their mental health issues by means of conditioning, visualization, and rationalization.

In addition, in their book titled Virtual Reality, Training’s Future? published in 2013, Seidel and Chatelier discussed the advantages VR technology offers to education thanks to its capacity to enhance learning and take it to a higher level: students can combine theory and practice, be placed in various educational situations that require decision-making, and gain hands-on experience with new equipment (31). In that way, VR can help advance the learning of medical students, technicians, machinery operators, and military trainees, to name a few.

Weaknesses of Virtual Reality Technology

In an article published in The Guardian in 2016, it is noted that modern VR devices used in different fields have a variety of adverse side effects, such as dizziness, nausea, headaches, seizures, and even impaired hand-eye coordination (Davis par. 3). Moreover, there may be a number of long-term effects that have not yet been revealed. The same article states that prolonged use of modern VR devices is known to cause a significant level of discomfort (par. 3-7). In that way, there is a concern that the long-term effects of using VR devices may be quite dangerous.

Virtual reality is a rapidly developing and promising technology. Potentially, it could be applied in a wide range of fields such as education and learning, the military, healthcare, psychology, and entertainment. However, it is still under development and has multiple disadvantages and side effects (adverse short-term impacts on users’ health and unresearched long-term impacts).

Works Cited

Chang, Maiga, et al. Edutainment Technologies. Educational Games and Virtual Reality/Augmented Reality Applications: 6th International Conference on E-learning and Games, Edutainment 2011, Taipei, Taiwan, Proceedings. Springer Science & Business Media, 2011.

Davis, Nicola. “Long-Term Effects of Virtual Reality Use Need More Research, Say Scientists.” The Guardian . 2016, n.p.

Rizzo, Albert, and Gerard Kim. “A SWOT Analysis of the Field of Virtual Reality Rehabilitation and Therapy.” Presence, vol. 14, no. 2, 2005, pp. 119–146.

Seidel, Robert J., and Paul R. Chatelier. Virtual Reality, Training’s Future?: Perspectives on Virtual Reality and Related Emerging Technologies. Springer Science & Business Media, 2013.

Sherman, William R., and Alan B. Craig. Understanding Virtual Reality: Interface, Application, and Design. Morgan Kaufmann, 2003.

Terhakopian, Artin. “Embracing Virtual Reality.” Time . 2013, n.p.

IvyPanda. (2020, October 6). Virtual Reality's Main Benefits. https://ivypanda.com/essays/virtual-realitys-main-benefits/


Meta unveils new virtual reality headsets — and a plan for their use in classrooms

Ayesha Rascoe

NPR's Ayesha Rascoe speaks to Nick Clegg, president of global affairs at Meta about the company's new virtual reality headsets and Meta's plans to have the headsets used in classrooms.

AYESHA RASCOE, HOST:

Facebook's parent company, Meta, has a new educational product for their Quest virtual reality headset, intended to go along with third-party educational apps. That's right - this one's headed to the classroom. Now, the headsets, which cost around $300 and are aimed at students who are 13 and older, are already in some schools. Meta's president of global affairs, Nick Clegg, thinks these headsets can engage students by immersing them into virtual environments - study ancient Rome by walking through ancient Rome, dinosaurs by walking among dinosaurs. We've seen Meta's commercials. I asked him, though, if headsets were an answer for students struggling with reading or math, areas where test scores have been at their lowest level in decades.

NICK CLEGG: I was reading a study the other day by I think it's Pricewaterhouse, PwC, who said that the learners that they'd spoken to, who'd been learning in virtual reality, said that they were 150% more engaged during classes than they otherwise would be. And Morehouse College reported much higher average final test scores for students learning in VR than from traditional or even traditional online methods. This isn't just about kind of academic learning. This is also about practical education. So for instance, in Tulsa, there is a welding school where welders of all levels are using VR to upskill their welding, you know, certification, their welding training. So I think there are lots and lots of different applications that educators and teachers are telling us at this very sort of nascent stage of the technology that they're using.

RASCOE: What do you say to those who will be critical of Meta in this space, given Meta's record of creating and marketing social media tools to children and teens that are addictive, and that research shows can have a real negative impact on kids' mental health? Why should Meta or its products be trusted in a classroom?

CLEGG: I don't think it's about whether you do or don't trust a company like Meta. It's do you or don't you trust the judgment of the teacher in the classroom? And we are building these tools so it is entirely controlled by the teacher. It's not controlled by us. It's the teacher that decides whether the headset is used. It's the teacher that decides what the content is on the headset. Students won't be able to access the Meta Quest store. They won't be able to access social media apps and social experiences on the Meta platform.

RASCOE: Separately, since I have you here - we're in an election year. There are major concerns about deepfakes and altered media spreading misinformation online. Meta announced starting next month, it will label AI-generated content and will also label any digitally altered media, AI or not, that it feels poses a particularly high risk of materially deceiving the public on a matter of importance. When it comes to election-related content, why just label the content and not remove it completely if it poses a risk of deceiving the public?

CLEGG: Oh, no. We will continue, of course, to remove content that breaks our rules. It doesn't matter whether it's synthetic or whether it's by a human being. We disable networks of fake accounts. We expect people, if they're going to use AI to produce political ads, to declare that. And if they don't, and they repeatedly fall foul of our rules, we won't allow them to run ads. But we have to work across the industry.

RASCOE: Well, that brings me to this question, because researchers at the New York University Stern Center for Business and Human Rights - they released a report this year arguing that it's not the creation of AI content that's really a threat to election security, but the distribution of, quote, "false, hateful and violent content" via social media platforms. They argue that companies like Meta need to add more humans to content moderation. They need to fund more outside fact-checkers and institute circuit breakers to slow the spread of certain viral posts, that it's really about the distribution of the content versus, like, how it's created by AI or whatever. What is your response to that?

CLEGG: Well, we as Meta so happen to have by far the world's largest network of fact-checkers, over 100 of them around the world, working in over 70 languages. If you look, for instance, at the prevalence of hate speech on Facebook now, what does prevalence mean? That means the percentage of hate speech as a percentage of the total amount of content on Facebook. It's down to as low as 0.01%. And by the way, that's not just my statistic or the statistic from Meta. That's actually an independently vetted statistic.

RASCOE: But you said you have 100 fact-checkers. I mean, there are millions and millions of posts. So is that something where you need more content moderators, you need more fact-checkers - is that something that Meta would consider in a pivotal election year?

CLEGG: Well, as I say, we constantly expand the number of fact-checkers we have. We'll never be perfect. The internet is a big open landscape of content, but I think we are a completely different company now than we were, for instance, back in 2016 at the time of the Russian interference in the U.S. elections then.

RASCOE: That's Nick Clegg, Meta's president for global affairs. Thanks for joining us.

CLEGG: Thank you.

Copyright © 2024 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

ORIGINAL RESEARCH article

The past, present, and future of virtual and augmented reality research: a network and cluster analysis of the literature.

Pietro Cipresso*

  • 1 Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano, Milan, Italy
  • 2 Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
  • 3 Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, Valencia, Spain

The recent appearance of low-cost virtual reality (VR) technologies – like the Oculus Rift, the HTC Vive and the Sony PlayStation VR – and Mixed Reality Interfaces (MRITF) – like the Hololens – is attracting the attention of users and researchers, suggesting it may be the next large stepping stone in technological innovation. However, the history of VR technology is longer than it may seem: the concept of VR was formulated in the 1960s and the first commercial VR tools appeared in the late 1980s. For this reason, during the last 20 years, 100s of researchers have explored the processes, effects, and applications of this technology, producing 1000s of scientific papers. What is the outcome of this significant research effort? This paper aims to provide an answer to this question by exploring, using advanced scientometric techniques, the existing research corpus in the field. We collected all the existing articles about VR in the Web of Science Core Collection scientific database; the resulting dataset contained 21,667 records for VR and 9,944 for augmented reality (AR). Each bibliographic record contained various fields, such as author, title, abstract, country, and all the references (needed for the citation analysis). The network and cluster analysis of the literature showed a composite panorama characterized by changes and evolutions over time. Indeed, whereas until 5 years ago the main publication media for VR were both conference proceedings and journals, more recently journals have become the main medium of communication. Similarly, while computer science was at first the leading research field, clinical areas have since grown, as has the number of countries involved in VR research. The present work discusses the evolution of the use of VR in its main areas of application over time, with an emphasis on VR's expected future capabilities, growth, and challenges.
We conclude by considering the disruptive contribution that VR/AR/MRITF will be able to make in scientific fields, as well as in human communication and interaction, as already happened with the advent of mobile phones, by increasing the use and development of scientific applications (e.g., in clinical areas) and by modifying social communication and interaction among people.

Introduction

In the last 5 years, virtual reality (VR) and augmented reality (AR) have attracted the interest of investors and the general public, especially after Mark Zuckerberg bought Oculus for two billion dollars ( Luckerson, 2014 ; Castelvecchi, 2016 ). Currently, many other companies, such as Sony, Samsung, HTC, and Google are making huge investments in VR and AR ( Korolov, 2014 ; Ebert, 2015 ; Castelvecchi, 2016 ). However, if VR has been used in research for more than 25 years, and now there are 1000s of papers and many researchers in the field, comprising a strong, interdisciplinary community, AR has a more recent application history ( Burdea and Coiffet, 2003 ; Kim, 2005 ; Bohil et al., 2011 ; Cipresso and Serino, 2014 ; Wexelblat, 2014 ). The study of VR was initiated in the computer graphics field and has been extended to several disciplines ( Sutherland, 1965 , 1968 ; Mazuryk and Gervautz, 1996 ; Choi et al., 2015 ). Currently, videogames supported by VR tools are more popular than the past, and they represent valuables, work-related tools for neuroscientists, psychologists, biologists, and other researchers as well. Indeed, for example, one of the main research purposes lies from navigation studies that include complex experiments that could be done in a laboratory by using VR, whereas, without VR, the researchers would have to go directly into the field, possibly with limited use of intervention. The importance of navigation studies for the functional understanding of human memory in dementia has been a topic of significant interest for a long time, and, in 2014, the Nobel Prize in “Physiology or Medicine” was awarded to John M. O’Keefe, May-Britt Moser, and Edvard I. Moser for their discoveries of nerve cells in the brain that enable a sense of place and navigation. Journals and magazines have extended this knowledge by writing about “the brain GPS,” which gives a clear idea of the mechanism. 
A huge number of studies have been conducted in clinical settings using VR ( Bohil et al., 2011 ; Serino et al., 2014 ), and Nobel Prize winner Edvard I. Moser commented on the use of VR ( Minderer et al., 2016 ), highlighting its importance for research and clinical practice. Moreover, the availability of free tools for VR experimental and computational use has made the technology easy to access in any field ( Riva et al., 2011 ; Cipresso, 2015 ; Brown and Green, 2016 ; Cipresso et al., 2016 ).

Augmented reality is a more recent technology than VR and shows an interdisciplinary application framework in which, nowadays, education and learning seem to be the most active fields of research. Indeed, AR supports learning, for example by increasing content understanding and memory retention, as well as learning motivation. However, whereas VR benefits from clear and well-defined fields of application and research areas, AR is still emerging in the scientific scenario.

In this article, we present a systematic and computational analysis of the emerging interdisciplinary VR and AR fields in terms of various co-citation networks in order to explore the evolution of the intellectual structure of this knowledge domain over time.

Virtual Reality Concepts and Features

The concept of VR can be traced back to the mid-1960s, when Ivan Sutherland, in a pivotal manuscript, described VR as a window through which a user perceives the virtual world as if it looked, felt, and sounded real, and in which the user could act realistically ( Sutherland, 1965 ).

Since that time and in accordance with the application area, several definitions have been formulated: for example, Fuchs and Bishop (1992) defined VR as “real-time interactive graphics with 3D models, combined with a display technology that gives the user the immersion in the model world and direct manipulation” ( Fuchs and Bishop, 1992 ); Gigante (1993) described VR as “The illusion of participation in a synthetic environment rather than external observation of such an environment. VR relies on a 3D, stereoscopic head-tracker displays, hand/body tracking and binaural sound. VR is an immersive, multi-sensory experience” ( Gigante, 1993 ); and “Virtual reality refers to immersive, interactive, multi-sensory, viewer-centered, 3D computer generated environments and the combination of technologies required building environments” ( Cruz-Neira, 1993 ).

Although these definitions differ, they highlight three common features of VR systems: immersion, the perception of being present in an environment, and interaction with that environment ( Biocca, 1997 ; Lombard and Ditton, 1997 ; Loomis et al., 1999 ; Heeter, 2000 ; Biocca et al., 2001 ; Bailenson et al., 2006 ; Skalski and Tamborini, 2007 ; Andersen and Thorpe, 2009 ; Slater, 2009 ; Sundar et al., 2010 ). Specifically, immersion concerns the number of senses stimulated, the interactions available, and the similarity to reality of the stimuli used to simulate environments. This feature depends on the properties of the technological system used to isolate the user from reality ( Slater, 2009 ).

Higher or lower degrees of immersion depend on which of three types of VR system is provided to the user:

• Non-immersive systems are the simplest and cheapest type of VR application; they use desktop displays to reproduce images of the world.

• Immersive systems provide a complete simulated experience thanks to several sensory output devices, such as head-mounted displays (HMDs), which enhance the stereoscopic view of the environment through the movement of the user’s head, as well as audio and haptic devices.

• Semi-immersive systems, such as Fish Tank VR, fall between the two above. They provide a stereo image of a three-dimensional (3D) scene viewed on a monitor using a perspective projection coupled to the head position of the observer ( Ware et al., 1993 ).

More immersive systems have been shown to provide an experience closer to reality, giving the user the illusion of technological non-mediation and the feeling of “being in,” or present in, the virtual environment ( Lombard and Ditton, 1997 ). Furthermore, compared with the other two system types, highly immersive systems can add several sensory outputs so that interactions and actions are perceived as real ( Loomis et al., 1999 ; Heeter, 2000 ; Biocca et al., 2001 ).

Finally, the user’s VR experience can be characterized by measuring levels of presence, realism, and reality. Presence is a complex psychological feeling of “being there” in VR that involves the sensation and perception of physical presence, as well as the possibility of interacting and reacting as if the user were in the real world ( Heeter, 1992 ). Similarly, the level of realism corresponds to the degree to which the user’s expectations about the stimuli and the experience are met ( Baños et al., 2000 , 2009 ). If the presented stimuli are similar to reality, the VR user’s expectations will be congruent with real-world expectations, enhancing the VR experience. In the same way, the higher the degree of reality in the interaction with the virtual stimuli, the higher the level of realism of the user’s behaviors ( Baños et al., 2000 , 2009 ).

From Virtual to Augmented Reality

Looking chronologically at VR and AR developments, we can trace the first 3D immersive simulator to 1962, when Morton Heilig created Sensorama, a simulated experience of a motorcycle ride through Brooklyn characterized by several sensory impressions, such as audio, olfactory, and haptic stimuli, including wind, to provide a realistic experience ( Heilig, 1962 ). In the same years, Ivan Sutherland proposed The Ultimate Display which, beyond sound, smell, and haptic feedback, included interactive graphics that Sensorama did not provide. Furthermore, Philco developed the first HMD which, together with Sutherland’s The Sword of Damocles, was able to update the virtual images by tracking the user’s head position and orientation ( Sutherland, 1965 ). In the 1970s, the University of North Carolina realized GROPE, the first force-feedback system, and Myron Krueger created VIDEOPLACE, an Artificial Reality in which the users’ body figures were captured by cameras and projected on a screen ( Krueger et al., 1985 ). In this way, two or more users could interact in the 2D virtual space. In 1982, the US Air Force created the first flight simulator [the Visually Coupled Airborne Systems Simulator (VCASS)], in which the pilot could control the pathway and the targets through an HMD. Generally, the 1980s were the years in which the first commercial devices began to emerge: for example, in 1985, the VPL company commercialized the DataGlove, a glove equipped with sensors able to measure finger flexion, orientation, and position, and to identify hand gestures. Another example is the EyePhone, created in 1988 by the VPL company, an HMD system for completely immersing the user in a virtual world. At the end of the 1980s, Fake Space Labs created the Binocular Omni-Orientation Monitor (BOOM), a complex system composed of a stereoscopic display device, providing a moving and broad virtual environment, and a mechanical tracking arm. 
Furthermore, BOOM offered a more stable image and responded more quickly to movements than the HMD devices. Thanks to BOOM and the DataGlove, the NASA Ames Research Center developed the Virtual Wind Tunnel in order to study and manipulate airflow over a virtual airplane or spaceship. In 1992, the Electronic Visualization Laboratory of the University of Illinois created the CAVE Automatic Virtual Environment, an immersive VR system composed of projectors directed at three or more walls of a room.

More recently, many videogame companies have improved the development and quality of VR devices, such as the Oculus Rift or the HTC Vive, which provide a wider field of view and lower latency. In addition, current HMD devices can now be combined with other tracking systems, such as eye-tracking systems (FOVE) and motion and orientation sensors (e.g., Razer Hydra, Oculus Touch, or HTC Vive).

Simultaneously, at the beginning of the 1990s, the Boeing Corporation created the first prototype AR system, used to show employees how to assemble wiring bundles ( Carmigniani et al., 2011 ). At the same time, Rosenberg and Feiner developed an AR fixture for maintenance assistance, showing that operator performance was enhanced by adding virtual information to the fixture to be repaired ( Rosenberg, 1993 ). In 1993, Loomis and colleagues produced an AR GPS-based system to help blind people in assisted navigation by adding spatial audio information ( Loomis et al., 1998 ). Also in 1993, Julie Martin developed “Dancing in Cyberspace,” an AR theater production in which actors interacted with virtual objects in real time ( Cathy, 2011 ). A few years later, Feiner et al. (1997) developed the first Mobile AR System (MARS), able to add virtual information about touristic buildings ( Feiner et al., 1997 ). Since then, several applications have been developed: Thomas et al. (2000) created ARQuake, a mobile AR video game; in 2008, Wikitude was created, which, through the mobile camera, the internet, and GPS, could add information about the user’s surroundings ( Perry, 2008 ). In 2009, other AR applications, such as AR Toolkit and SiteLens, were developed in order to add virtual information to the user’s physical surroundings. In 2011, Total Immersion developed D’Fusion, an AR system for designing projects ( Maurugeon, 2011 ). Finally, in 2013, Google developed Google Glass and, in 2015, Microsoft introduced HoloLens, and their usability has begun to be tested in several fields of application.

Virtual Reality Technologies

Technologically, the devices used in virtual environments play an important role in the creation of successful virtual experiences. According to the literature, input and output devices can be distinguished ( Burdea et al., 1996 ; Burdea and Coiffet, 2003 ). Input devices are the ones that allow the user to communicate with the virtual environment; they can range from a simple joystick or keyboard to a glove capturing finger movements or a tracker able to capture postures. More in detail, the keyboard, mouse, trackball, and joystick represent the easy-to-use desktop input devices, which allow the user to issue continuous and discrete commands or movements to the environment. Other input devices are tracking devices, such as bend-sensing gloves that capture hand movements, postures, and gestures, pinch gloves that detect finger movements, and trackers able to follow the user’s movements in the physical world and translate them into the virtual environment.

By contrast, output devices allow the user to see, hear, smell, or touch everything that happens in the virtual environment. As mentioned above, among the visual devices a wide range of possibilities can be found, from the simplest or least immersive (the monitor of a computer) to the most immersive ones, such as VR glasses, helmets, HMDs, or CAVE systems.

Furthermore, auditory devices, such as speakers, as well as haptic output devices are able to stimulate the body senses, providing a more realistic virtual experience. For example, haptic devices can convey touch sensations and forces to the user.

Virtual Reality Applications

Since its appearance, VR has been used in different fields, such as gaming ( Zyda, 2005 ; Meldrum et al., 2012 ), military training ( Alexander et al., 2017 ), architectural design ( Song et al., 2017 ), education ( Englund et al., 2017 ), learning and social skills training ( Schmidt et al., 2017 ), and simulations of surgical procedures ( Gallagher et al., 2005 ); assistance to the elderly and psychological treatments are other fields in which VR is expanding strongly ( Freeman et al., 2017 ; Neri et al., 2017 ). A recent and extensive review by Slater and Sanchez-Vives (2016) reported the main evidence for VR applications, including weaknesses and advantages, in several research areas, such as science, education, training, physical training, as well as social phenomena and moral behaviors, and noted that VR could also be used in other fields, like travel, meetings, collaboration, industry, news, and entertainment. Furthermore, another review published this year by Freeman et al. (2017) focused on VR in mental health, showing the efficacy of VR in assessing and treating different psychological disorders, such as anxiety, schizophrenia, depression, and eating disorders.

There are many possibilities that allow the use of VR as a stimulus, replacing real stimuli and recreating, with high realism, experiences that would be impossible in the real world. This is why VR is widely used in research on new ways of applying psychological treatment or training, for example, for problems arising from phobias (agoraphobia, fear of flying, etc.) ( Botella et al., 2017 ). It is also used simply to improve traditional systems of motor rehabilitation ( Llorens et al., 2014 ; Borrego et al., 2016 ), with games that make the tasks more engaging. More in detail, in psychological treatment, Virtual Reality Exposure Therapy (VRET) has shown its efficacy, allowing patients to gradually face feared stimuli or stressful situations in a safe environment where the psychological and physiological reactions can be controlled by the therapist ( Botella et al., 2017 ).

Augmented Reality Concept

Milgram and Kishino (1994) conceptualized the Reality-Virtuality Continuum, which takes into consideration four systems: real environment, augmented reality (AR), augmented virtuality, and virtual environment. AR can be defined as a newer technological system in which virtual objects are added to the real world in real-time during the user’s experience. Per Azuma et al. (2001) , an AR system should: (1) combine real and virtual objects in a real environment; (2) run interactively and in real-time; and (3) register real and virtual objects with each other. Furthermore, even if AR experiences may seem different from VR ones, the quality of the AR experience can be considered in a similar way. Indeed, as in VR, the feeling of presence, the level of realism, and the degree of reality are the main features that can be considered indicators of the quality of AR experiences. The more the experience is perceived as realistic, and the greater the congruence between the user’s expectations and the interactions inside the AR environment, the stronger the perception of “being there” physically, as well as at the cognitive and emotional levels. The feeling of presence, in both AR and VR environments, is important for eliciting behaviors like the real ones ( Botella et al., 2005 ; Juan et al., 2005 ; Bretón-López et al., 2010 ; Wrzesien et al., 2013 ).

Augmented Reality Technologies

Technologically, AR systems, however various, share three common components: a geospatial datum for the virtual object, like a visual marker; a surface on which to project the virtual elements to the user; and adequate processing power for graphics, animation, and the merging of images, like a PC with a monitor ( Carmigniani et al., 2011 ). To run, an AR system must also include a camera able to track the user’s movement for merging the virtual objects, and a visual display, like glasses, through which the user can see the virtual objects overlaid on the physical world. To date, two display systems exist: video see-through (VST) and optical see-through (OST) AR systems ( Botella et al., 2005 ; Juan et al., 2005 , 2007 ). The first discloses virtual objects to the user by capturing the real objects/scenes with a camera, overlaying virtual objects, and projecting the result on a video or a monitor, while the second merges the virtual objects onto a transparent surface, like glasses, through which the user sees the added elements. The main difference between the two systems is latency: an OST system may require more time to display the virtual objects than a VST system, generating a time lag between the user’s actions and their detection by the system.

Augmented Reality Applications

Although AR is a more recent technology than VR, it has been investigated and used in several research areas, such as architecture ( Lin and Hsu, 2017 ), maintenance ( Schwald and De Laval, 2003 ), entertainment ( Ozbek et al., 2004 ), education ( Nincarean et al., 2013 ; Bacca et al., 2014 ; Akçayır and Akçayır, 2017 ), medicine ( De Buck et al., 2005 ), and psychological treatments ( Juan et al., 2005 ; Botella et al., 2005 , 2010 ; Bretón-López et al., 2010 ; Wrzesien et al., 2011a , b , 2013 ; see the review by Chicchi Giglioli et al., 2015 ). More in detail, in education, several AR applications have been developed in the last few years, showing the positive effects of this technology in supporting learning, such as increased content understanding and memory retention, as well as greater learning motivation ( Radu, 2012 , 2014 ). For example, Ibáñez et al. (2014) developed an AR application for learning electromagnetism concepts, in which students could use AR batteries, magnets, and cables on real surfaces, and the system gave real-time feedback to students about the correctness of their performance, improving academic success and motivation ( Di Serio et al., 2013 ). More deeply, AR systems offer the possibility of learning by visualizing and acting on complex phenomena that students traditionally study theoretically, without the possibility of seeing and testing them in the real world ( Chien et al., 2010 ; Chen et al., 2011 ).

In psychological health as well, the amount of research on AR is increasing, showing its efficacy above all in the treatment of psychological disorders (see the reviews by Baus and Bouchard, 2014 ; Chicchi Giglioli et al., 2015 ). For example, in the treatment of anxiety disorders, such as phobias, AR exposure therapy (ARET) has shown its efficacy in one-session treatments, maintaining the positive impact at follow-ups 1 or 3 months later. Like VRET, ARET provides a safe and ecological environment where any kind of stimulus is possible, allowing the therapist to keep control over the situation experienced by the patients, gradually generating situations of fear or stress. Indeed, in situations of fear, such as phobias of small animals, AR applications allow, in accordance with the patient’s anxiety, the gradual exposure of the patient to the feared animals, adding new animals during the session, enlarging them, or increasing their speed. The various studies showed that AR is able to activate the patient’s anxiety at the beginning of the session, which then decreases after 1 h of exposure. After the session, patients were not only better able to manage their fear of the animals and their anxiety, but were also able to approach, interact with, and kill real feared animals.

Materials and Methods

Data Collection

The input data for the analyses were retrieved from the Web of Science Core Collection scientific database ( Falagas et al., 2008 ); the search terms used were “Virtual Reality” and “Augmented Reality,” covering papers published during the whole timespan available.

The Web of Science Core Collection comprises the following citation indexes:

• Science Citation Index Expanded (SCI-EXPANDED), 1970–present
• Social Sciences Citation Index (SSCI), 1970–present
• Arts & Humanities Citation Index (A&HCI), 1975–present
• Conference Proceedings Citation Index – Science (CPCI-S), 1990–present
• Conference Proceedings Citation Index – Social Science & Humanities (CPCI-SSH), 1990–present
• Book Citation Index – Science (BKCI-S), 2009–present
• Book Citation Index – Social Sciences & Humanities (BKCI-SSH), 2009–present
• Emerging Sources Citation Index (ESCI), 2015–present

and the following chemical indexes:

• Current Chemical Reactions (CCR-EXPANDED), 2009–present (includes Institut National de la Propriete Industrielle structure data back to 1840)
• Index Chemicus (IC), 2009–present

The resultant dataset contained a total of 21,667 records for VR and 9,944 records for AR. Each bibliographic record contained various fields, such as author, title, abstract, and all of the references (needed for the citation analysis). The tool used to visualize the networks was CiteSpace v.4.0.R5 SE (32 bit) ( Chen, 2006 ) under Java Runtime v.8 update 91 (build 1.8.0_91-b15). Statistical analyses were conducted using Stata MP-Parallel Edition, Release 14.0, StataCorp LP. Additional information can be found in Supplementary Data Sheet 1 .

The betweenness centrality of a node in a network measures the extent to which the node lies on the shortest paths connecting arbitrary pairs of other nodes in the network ( Freeman, 1977 ; Brandes, 2001 ; Chen, 2006 ).
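To make the metric concrete, the following is a minimal, illustrative re-implementation of Brandes’ algorithm ( Brandes, 2001 ) for unweighted, undirected graphs; the function name and the adjacency-dict input format are our own didactic choices, not CiteSpace’s actual code:

```python
from collections import deque

def betweenness_centrality(graph):
    """Brandes' algorithm for an unweighted, undirected graph.
    graph: dict mapping each node to a list of its neighbors."""
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        # Stage 1: BFS from s, counting shortest paths (sigma) and predecessors.
        stack = []
        pred = {v: [] for v in graph}
        sigma = dict.fromkeys(graph, 0.0)
        sigma[s] = 1.0
        dist = dict.fromkeys(graph, -1)
        dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:            # w found for the first time
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:  # shortest path to w via v
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Stage 2: back-propagate pair dependencies in order of decreasing distance.
        delta = dict.fromkeys(graph, 0.0)
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected path is counted from both endpoints, so halve the totals.
    for v in bc:
        bc[v] /= 2.0
    return bc
```

For example, in the path graph a–b–c, only b lies between the other two nodes, so its betweenness is 1 while a and c score 0.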

Structural metrics include betweenness centrality, modularity, and silhouette. Temporal and hybrid metrics include citation burstness and novelty. All the algorithms are detailed in Chen et al. (2010) .
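As an illustration of one of these structural metrics, the sketch below computes Newman’s modularity Q for a hard partition of an undirected network; this is a didactic re-implementation under our own naming and data-structure assumptions, not the code used by CiteSpace:

```python
def modularity(graph, communities):
    """Newman's modularity Q for a hard partition.
    graph: dict mapping node -> set of neighbors (undirected).
    communities: dict mapping node -> community label.
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    m = sum(len(nbrs) for nbrs in graph.values()) / 2  # number of edges
    q = 0.0
    for i in graph:
        for j in graph:
            if communities[i] != communities[j]:
                continue  # delta(c_i, c_j) = 0
            a_ij = 1.0 if j in graph[i] else 0.0
            q += a_ij - len(graph[i]) * len(graph[j]) / (2 * m)
    return q / (2 * m)
```

For two disconnected edges, each assigned to its own community, this yields Q = 0.5, the textbook value for that partition.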

The analysis of the literature on VR shows a complex panorama. At first sight, according to the document-type statistics from the Web of Science (WoS), proceedings papers were used extensively as research outcomes, comprising almost 48% of the total (10,392 proceedings papers), with a similar number of articles on the subject, about 47% of the total (10,199 articles). However, if we consider only the last 5 years (7,755 records, representing about 36% of the total), the situation changes, with about 57% articles (4,445) and about 33% proceedings papers (2,578). Thus, it is clear that the VR field has been changing in areas other than the purely technological.

Regarding the subject categories, nodes and edges are computed as co-occurring subject categories from the Web of Science “Category” field across all the articles.

According to the subject category statistics from the WoS, computer science is the leading category, followed by engineering; together, they account for 15,341 articles, about 71% of the total production. However, if we consider just the last 5 years, these categories reach only about 55%, with a total of 4,284 articles (Table 1 and Figure 1 ).


TABLE 1. Category statistics from the WoS for the entire period and the last 5 years.


FIGURE 1. Categories from the WoS: network for the last 5 years.

This evidence is very interesting, since it highlights that VR is doing very well as a new technology, with huge interest in its hardware and software components. However, with respect to the past, we are witnessing increasing numbers of applications, especially in the medical area. In particular, note the inclusion of the rehabilitation and clinical neurology categories in the top-10 list (about 10% of the total production in the last 5 years). It is also interesting that neuroscience and neurology, considered together, have shown an increase from about 12% to about 18.6% over the last 5 years. However, historic areas, such as automation and control systems, imaging science and photographic technology, and robotics, which had accounted for about 14.5% of all the articles ever produced, were not even in the top 10 for the last 5 years, with each one accounting for less than 4%.

Regarding the countries, nodes and edges are computed as networks of co-author countries. Multiple occurrences of a country in the same paper are counted once.
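The counting rule just described can be sketched as follows; the `country_edges` function and its input format are illustrative assumptions, not the paper’s actual pipeline:

```python
from collections import Counter
from itertools import combinations

def country_edges(papers):
    """papers: one list of author countries per paper.
    A country appearing several times on one paper is counted once;
    each unordered pair of distinct countries on a paper adds one edge."""
    nodes = Counter()
    edges = Counter()
    for countries in papers:
        uniq = sorted(set(countries))  # deduplicate within the paper
        nodes.update(uniq)
        for a, b in combinations(uniq, 2):
            edges[(a, b)] += 1         # canonical (sorted) pair key
    return nodes, edges
```

For instance, a paper with two United States authors and one Italian author contributes a single count to each country and a single United States–Italy edge.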

The countries most involved in VR research have published about 47% of the total (10,200 articles altogether). Of these 10,200 articles, the United States, China, England, and Germany published 4,921, 2,384, 1,497, and 1,398, respectively. The situation remains the same if we look at the articles published over the last 5 years. However, VR contributions have also come from all over the globe, with Japan, Canada, Italy, France, Spain, South Korea, and the Netherlands taking positions of prominence, as shown in Figure 2 .


FIGURE 2. Country network (node dimension represents centrality).

Network analysis was conducted to calculate and represent the centrality index ( Freeman, 1977 ; Brandes, 2001 ), i.e., the dimensions of the nodes in Figure 2 . The top-ranked country, with a centrality index of 0.26, was the United States (2011), and England was second, with a centrality index of 0.25. The third, fourth, and fifth countries were Germany, Italy, and Australia, with centrality indices of 0.15, 0.15, and 0.14, respectively.

Regarding the institutions, nodes and edges are computed as networks of co-author institutions (Figure 3 ).


FIGURE 3. Network of institutions: the dimensions of the nodes represent centrality.

The top-level institutions in VR were in the United States, where three universities ranked as the top three in the world for published articles: the University of Illinois (159), the University of Southern California (147), and the University of Washington (146). The United States also had the eighth-ranked university, Iowa State University (116). The second country in the ranking was Canada, with the University of Toronto ranked fifth with 125 articles and McGill University ranked 10th with 103 articles.

Other countries in the top-10 list were the Netherlands, with the Delft University of Technology ranked fourth with 129 articles; Italy, with IRCCS Istituto Auxologico Italiano ranked sixth (with the same number of publications as the fifth-ranked institution) with 125 published articles; England, ranked seventh with 125 articles from the University of London’s Imperial College of Science, Technology, and Medicine; and China, with the Chinese Academy of Sciences ranked ninth with 104 publications. Italy’s Istituto Auxologico Italiano, ranked sixth, was the only non-university institution in the top-10 list for VR research (Figure 3 ).

Regarding the journals, nodes and edges are computed as journal co-citation networks among the journals in the corresponding field.

The top-ranked journals by citations in VR are Presence: Teleoperators & Virtual Environments, with 2,689 citations, and CyberPsychology & Behavior (Cyberpsychol BEHAV), with 1,884 citations; however, looking at the last 5 years, while the former increased its citations, the latter had a far more significant increase, from about 70% to about 90%, i.e., from 1,029 to 1,147 citations.

After the top two journals, IEEE Computer Graphics and Applications ( IEEE Comput Graph) and Advanced Health Telematics and Telemedicine ( St HEAL T) were both left out of the top-10 list based on the last 5 years. The data for the last 5 years also resulted in the inclusion in the top-10 list of three journals, Experimental Brain Research ( Exp BRAIN RES) (625 citations), Archives of Physical Medicine and Rehabilitation ( Arch PHYS MED REHAB) (622 citations), and PLoS ONE (619 citations), which highlights the categories of rehabilitation, clinical neurology, neuroscience, and neurology. The journal co-citation analysis is reported in Figure 4 , which clearly shows four distinct clusters.


FIGURE 4. Co-citation network of journals: the dimensions of the nodes represent centrality. Full list of official abbreviations of WoS journals can be found here: https://images.webofknowledge.com/images/help/WOS/A_abrvjt.html .

Network analysis was conducted to calculate and represent the centrality index, i.e., the dimensions of the nodes in Figure 4 . The top-ranked item by centrality was Cyberpsychol BEHAV, with a centrality index of 0.29. The second-ranked item was Arch PHYS MED REHAB, with a centrality index of 0.23. The third was Behaviour Research and Therapy (Behav RES THER), with a centrality index of 0.15. The fourth was BRAIN, with a centrality index of 0.14. The fifth was Exp BRAIN RES, with a centrality index of 0.11.

Who’s Who in VR Research

Authors are the heart and brain of research; their role in a field is to define its past, present, and future and to make significant breakthroughs from which new ideas arise (Figure 5 ).


FIGURE 5. Network of authors’ numbers of publications: the dimensions of the nodes represent the centrality index, and the dimensions of the characters represent the author’s rank.

Virtual reality research is very young and changing over time, but the top-10 authors in this field have made fundamental contributions as pioneers of VR, taking it beyond a mere technological development. The purpose of the following highlights is not to rank researchers; rather, it is to identify the most active researchers in order to understand where the field is going and how they plan for it to get there.

The top-ranked author is Riva G, with 180 publications. The second-ranked author is Rizzo A, with 101 publications. The third is Darzi A, with 97 publications. The fourth is Aggarwal R, with 94 publications. The six authors following these four are Slater M, Alcaniz M, Botella C, Wiederhold BK, Kim SI, and Gutierrez-Maldonado J, with 90, 90, 85, 75, 59, and 54 publications, respectively (Figure 6 ).


FIGURE 6. Authors’ co-citation network: the dimensions of the nodes represent centrality index, and the dimensions of the characters represent the author’s rank. The 10 authors that appear on the top-10 list are considered to be the pioneers of VR research.

Considering the last 5 years, the situation remains similar, with three new entries in the top-10 list, i.e., Muhlberger A, Cipresso P, and Ahmed K ranked 7th, 8th, and 10th, respectively.

The authors’ publication-number network shows the most active authors in VR research. Another analysis relevant to our focus on VR research is identifying the most cited authors in the field.

For this purpose, authors’ co-citation analysis highlights the authors in terms of their impact on the literature, considering the entire time span of the field ( White and Griffith, 1981 ; González-Teruel et al., 2015 ; Bu et al., 2016 ). The idea is to focus on the dynamic nature of the community of authors who contribute to the research.

Normally, authors with higher numbers of citations tend to be the scholars who drive the fundamental research and who have the most meaningful impact on the evolution and development of the field. In the following, we identify the most-cited pioneers in the field of VR research.

The top-ranked author by citation count is Gallagher (2001), with 694 citations. Second is Seymour (2004), with 668 citations. Third is Slater (1999), with 649 citations. Fourth is Grantcharov (2003), with 563 citations. Fifth is Riva (1999), with 546 citations. Sixth is Aggarwal (2006), with 505 citations. Seventh is Satava (1994), with 477 citations. Eighth is Witmer (2002), with 454 citations. Ninth is Rothbaum (1996), with 448 citations. Tenth is Cruz-Neira (1995), with 416 citations.

Citation Network and Cluster Analyses for VR

Another analysis that can be used is document co-citation analysis, which allows us to focus on the highly cited documents, which generally are also the most influential in the domain ( Small, 1973 ; González-Teruel et al., 2015 ; Orosz et al., 2016 ).
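At its core, document co-citation analysis reduces to counting, across all citing papers, how often each pair of references appears together in a reference list. The sketch below illustrates that counting step; the function name, input format, and document IDs are illustrative assumptions, not the actual CiteSpace workflow:

```python
from collections import Counter
from itertools import combinations

def cocitation_network(reference_lists):
    """reference_lists: one list of cited-document IDs per citing paper.
    Two documents are co-cited once for each paper citing them both;
    the resulting edge weight is the number of such citing papers."""
    edges = Counter()
    for refs in reference_lists:
        # deduplicate and sort so each unordered pair has one canonical key
        for a, b in combinations(sorted(set(refs)), 2):
            edges[(a, b)] += 1
    return edges
```

Clustering algorithms are then run on the weighted network that these pair counts define, which is how the clusters discussed below are obtained.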

The top-ranked article by citation count is Seymour (2002) in Cluster #0, with 317 citations. The second is Grantcharov (2004) in Cluster #0, with 286 citations. The third is Holden (2005) in Cluster #2, with 179 citations. The fourth is Gallagher et al. (2005) in Cluster #0, with 171 citations. The fifth is Ahlberg (2007) in Cluster #0, with 142 citations. The sixth is Parsons (2008) in Cluster #4, with 136 citations. The seventh is Powers (2008) in Cluster #4, with 134 citations. The eighth is Aggarwal (2007) in Cluster #0, with 121 citations. The ninth is Reznick (2006) in Cluster #0, with 121 citations. The tenth is Munz (2004) in Cluster #0, with 117 citations.

The network of document co-citations is visually complex (Figure 7 ) because it includes thousands of articles and the links among them. However, this analysis is very important because it can be used to identify possible conglomerates of knowledge in the area, which is essential for a deep understanding of the field. Thus, for this purpose, a cluster analysis was conducted ( Chen et al., 2010 ; González-Teruel et al., 2015 ; Klavans and Boyack, 2015 ). Figure 8 shows the clusters, which are identified with the two algorithms in Table 2 .

FIGURE 7. Network of document co-citations: node size represents centrality, label size represents article rank, and the numbers represent the strengths of the links. It is possible to identify four historical phases (colors: blue, green, yellow, and red) from past VR research to current research.

FIGURE 8. Document co-citation network by cluster: node size represents centrality, label size represents article rank, and the red text reports the name of each cluster with a short description produced with the mutual-information algorithm; the clusters are identified with colored polygons.

TABLE 2. Cluster ID and silhouettes as identified with two algorithms ( Chen et al., 2010 ).
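
The silhouette values reported in Table 2 measure how homogeneous and well separated each cluster is. The paper obtains them from CiteSpace; the following is only a from-scratch sketch of the standard definition, s(i) = (b - a) / max(a, b), on made-up 2-D points:

```python
import numpy as np

def mean_silhouettes(points, labels):
    """Mean silhouette per cluster, s(i) = (b - a) / max(a, b):
    a = mean distance to the other members of i's own cluster,
    b = mean distance to the members of the nearest other cluster.
    Values near 1 indicate a tight, well-separated cluster."""
    points, labels = np.asarray(points, float), np.asarray(labels)
    result = {}
    for lab in np.unique(labels):
        scores = []
        for i in np.flatnonzero(labels == lab):
            dists = np.linalg.norm(points - points[i], axis=1)
            own = (labels == lab)
            own[i] = False  # exclude the point itself
            a = dists[own].mean()
            b = min(dists[labels == other].mean()
                    for other in np.unique(labels) if other != lab)
            scores.append((b - a) / max(a, b))
        result[lab] = float(np.mean(scores))
    return result

# Two tight, well-separated toy clusters → silhouettes close to 1.
pts = [[0, 0], [0, 1], [10, 0], [10, 1]]
sil = mean_silhouettes(pts, [0, 0, 1, 1])
print(sil)
```

In a co-citation network the "distance" would be derived from the co-citation matrix rather than from Euclidean coordinates, but the interpretation of the score is the same.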

The identified clusters highlight distinct parts of the literature of VR research, making the interdisciplinary nature of this field clear and visible. However, the dynamics of the past, present, and future of VR research are not yet clear from the clusters alone, so we analyzed the relationships between these clusters and the temporal dimension of each article. The results are synthesized in Figure 9. It is clear that cluster #0 (laparoscopic skill), cluster #2 (gaming and rehabilitation), cluster #4 (therapy), and cluster #14 (surgery) are the most popular areas of VR research. (See Figure 9 and Table 2 to identify the clusters.) From Figure 9, it also is possible to identify the first phase of laparoscopic skill (cluster #6) and therapy (cluster #7). More generally, it is possible to identify four historical phases (colors: blue, green, yellow, and red) from past VR research to current research.

FIGURE 9. Network of document co-citations: node size represents centrality, label size represents article rank, and the red text on the right-hand side reports the cluster number, as in Table 2, with a short description extracted accordingly.

Using the citation-burst detection algorithm, we identified the top 486 references with the strongest citation bursts. A citation burst is an indicator of a highly active area of research: it marks a burst event, which can last for a single year or for multiple years, and provides evidence that a particular publication is associated with a surge of citations. The burst detection was based on Kleinberg's algorithm (Kleinberg, 2002, 2003). The top-ranked document by burst strength is Seymour (2002) in Cluster #0, with a burst of 88.93. The second is Grantcharov (2004) in Cluster #0, with a burst of 51.40. The third is Saposnik (2010) in Cluster #2, with a burst of 40.84. The fourth is Rothbaum (1995) in Cluster #7, with a burst of 38.94. The fifth is Holden (2005) in Cluster #2, with a burst of 37.52. The sixth is Scott (2000) in Cluster #0, with a burst of 33.39. The seventh is Saposnik (2011) in Cluster #2, with a burst of 33.33. The eighth is Burdea et al. (1996) in Cluster #3, with a burst of 32.42. The ninth is Burdea and Coiffet (2003) in Cluster #22, with a burst of 31.30. The tenth is Taffinder (1998) in Cluster #6, with a burst of 30.96 (Table 3).
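
Kleinberg's method models citations as emitted by a two-state automaton: a baseline state with the document's average citation rate and a burst state with an elevated rate, with a cost for entering the burst state. The following is a simplified, batched two-state sketch; CiteSpace's implementation differs in detail, and the parameters s and gamma here are illustrative defaults:

```python
import math

def kleinberg_bursts(counts, totals, s=2.0, gamma=1.0):
    """Simplified two-state burst detection (after Kleinberg, 2002).

    counts[t]: citations to the target document in year t
    totals[t]: all citations observed in year t
    Returns a 0/1 sequence; 1 marks years in the burst state.
    """
    n = len(counts)
    p0 = sum(counts) / sum(totals)   # baseline citation rate
    p1 = min(1.0, s * p0)            # elevated rate of the burst state
    trans = gamma * math.log(n)      # cost of entering the burst state

    def cost(p, c, t):
        # Negative log-likelihood of c "hits" out of t events at rate p
        # (binomial coefficient omitted: it is equal for both states).
        return -(c * math.log(p) + (t - c) * math.log(1 - p))

    # Viterbi-style dynamic program over the two states.
    best = [cost(p0, counts[0], totals[0]),
            cost(p1, counts[0], totals[0]) + trans]
    back = [[0, 0]]
    for t in range(1, n):
        back.append([0, 0])
        new = [0.0, 0.0]
        for q, p in enumerate((p0, p1)):
            from0 = best[0] + (trans if q == 1 else 0.0)  # 0 -> q
            from1 = best[1]                               # 1 -> q (free)
            back[t][q] = 0 if from0 <= from1 else 1
            new[q] = min(from0, from1) + cost(p, counts[t], totals[t])
        best = new
    # Recover the optimal state sequence.
    states = [0 if best[0] <= best[1] else 1]
    for t in range(n - 1, 0, -1):
        states.append(back[t][states[-1]])
    return states[::-1]

# A citation surge in years 3-4 of a toy 5-year series:
print(kleinberg_bursts([1, 1, 8, 9, 1], [100] * 5))  # → [0, 0, 1, 1, 0]
```

The burst "strength" reported above corresponds to how much the burst state lowers the total cost relative to staying in the baseline state.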

TABLE 3. Cluster ID and references of burst articles.

Citation Network and Cluster Analyses for AR

Turning to the augmented reality scenario, the top-ranked item by citation counts is Azuma (1997) in Cluster #0, with 231 citations. The second is Azuma et al. (2001) in Cluster #0, with 220 citations. The third is Van Krevelen (2010) in Cluster #5, with 207 citations. The fourth is Lowe (2004) in Cluster #1, with 157 citations. The fifth is Wu (2013) in Cluster #4, with 144 citations. The sixth is Dunleavy (2009) in Cluster #4, with 122 citations. The seventh is Zhou (2008) in Cluster #5, with 118 citations. The eighth is Bay (2008) in Cluster #1, with 117 citations. The ninth is Newcombe (2011) in Cluster #1, with 109 citations. The tenth is Carmigniani et al. (2011) in Cluster #5, with 104 citations.

The network of document co-citations for AR is also visually complex (Figure 10) because it includes thousands of articles and the links among them. However, this analysis is very important because it can be used to identify the conglomerates of knowledge in the area, which is essential for a deep understanding of the field. Thus, for this purpose, a cluster analysis was conducted (Chen et al., 2010; González-Teruel et al., 2015; Klavans and Boyack, 2015). Figure 11 shows the clusters, which are identified with the two algorithms in Table 4.

FIGURE 10. Network of document co-citations: node size represents centrality, label size represents article rank, and the numbers represent the strengths of the links. It is possible to identify four historical phases (colors: blue, green, yellow, and red) from past AR research to current research.

FIGURE 11. Document co-citation network by cluster: node size represents centrality, label size represents article rank, and the red text reports the name of each cluster with a short description produced with the mutual-information algorithm; the clusters are identified with colored polygons.

The identified clusters highlight distinct parts of the literature of AR research, making the interdisciplinary nature of this field clear and visible. However, the dynamics of the past, present, and future of AR research are not yet clear from the clusters alone, so we analyzed the relationships between these clusters and the temporal dimension of each article. The results are synthesized in Figure 12. It is clear that cluster #1 (tracking), cluster #4 (education), and cluster #5 (virtual city environment) are the current areas of AR research. (See Figure 12 and Table 4 to identify the clusters.) It is possible to identify four historical phases (colors: blue, green, yellow, and red) from past AR research to current research.

FIGURE 12. Network of document co-citations: node size represents centrality, label size represents article rank, and the red text on the right-hand side reports the cluster number, as in Table 4, with a short description extracted accordingly.

Using the same citation-burst detection algorithm (Kleinberg, 2002, 2003), we identified the top 394 references with the strongest citation bursts. The top-ranked document by burst strength is Azuma (1997) in Cluster #0, with a burst of 101.64. The second is Azuma et al. (2001) in Cluster #0, with a burst of 84.23. The third is Lowe (2004) in Cluster #1, with a burst of 64.07. The fourth is Van Krevelen (2010) in Cluster #5, with a burst of 50.99. The fifth is Wu (2013) in Cluster #4, with a burst of 47.23. The sixth is Hartley (2000) in Cluster #0, with a burst of 37.71. The seventh is Dunleavy (2009) in Cluster #4, with a burst of 33.22. The eighth is Kato (1999) in Cluster #0, with a burst of 32.16. The ninth is Newcombe (2011) in Cluster #1, with a burst of 29.72. The tenth is Feiner (1993) in Cluster #8, with a burst of 29.46 (Table 4).

TABLE 4. Cluster ID and silhouettes as identified with two algorithms ( Chen et al., 2010 ).

Our findings have profound implications for two reasons. First, the present work highlights the evolution and development of VR and AR research and provides a clear perspective based on solid data and computational analyses. Second, our findings on VR make it clear that the clinical dimension is one of the most investigated, and it appears to be growing both quantitatively and qualitatively; however, the field also encompasses technological development and articles in computer science, engineering, and allied sciences.

Figure 9 clarifies the past, present, and future of VR research. The outset of VR research brought clearly identifiable developments in interfaces for children and medicine, routine use and behavioral assessment, special effects, systems perspectives, and tutorials. This pioneering era evolved into what we can identify as the development era, the period in which VR was used in experiments associated with new technological impulses. Not surprisingly, this was exactly concomitant with the new-economy era, in which significant investments were made in information technology; it also was the era of the so-called "dot-com bubble" of the late 1990s. The confluence of pioneering techniques into ergonomic studies within this development era was used to develop the first effective clinical systems for surgery, telemedicine, and human spatial navigation, as well as the first phase of the development of therapy and laparoscopic skills. With the new millennium, VR research switched strongly toward what we can call the clinical-VR era, with its strong emphasis on rehabilitation, neurosurgery, and a new phase of therapy and laparoscopic skills. The number of applications and articles published in the last 5 years is in line with the new technological developments that we are experiencing at the hardware level, for example with so many new head-mounted displays (HMDs), and at the software level, with an increasing number of independent programmers and VR communities.

Finally, Figure 12 identifies clusters of the literature of AR research, making the interdisciplinary nature of this field clear and visible. The dynamics of the past, present, and future of AR research are not yet fully clear, but analyzing the relationships between these clusters and the temporal dimension of each article shows that tracking, education, and virtual city environments are the current areas of AR research. AR is a new technology that is showing its efficacy in different research fields, providing a novel way to gather behavioral data and to support learning, training, and clinical treatments.

Looking at the scientific literature of the last few years, it might appear that most developments in VR and AR studies have focused on clinical aspects. However, the reality is more complex, so this perception should be clarified. Although researchers publish studies on the use of VR in clinical settings, each study depends on the technologies available, and industrial development in VR and AR has changed considerably in the last 10 years. In the past, development involved mainly hardware solutions, whereas nowadays the main efforts pertain to software. Hardware has become a commodity that is often available at low cost, while software needs to be customized for each experiment, which requires a huge development effort. Researchers in AR and VR today need to be able to adapt software in their labs.

Virtual reality and AR developments in this new clinical era rely on computer science, and vice versa. The future of VR and AR is becoming more technological than before, and each day new solutions and products come to the market. From both software and hardware perspectives, the future of AR and VR depends on huge innovations in all fields. The gap between the past and the future of AR and VR research lies in the "realism" that was the key aspect in the past versus the "interaction" that is the key aspect now. The first 30 years of VR and AR consisted of continuous research into better resolution and improved perception. Now that researchers have achieved high resolution, they need to focus on making VR as realistic as possible, which is not simple: a real experience implies realistic interaction, not just high resolution. Interactions can be improved in countless ways through new developments at the hardware and software levels.

Interaction in AR and VR is going to be "embodied," with implications for neuroscientists who are devising new solutions to be implemented in current systems (Blanke et al., 2015; Riva, 2018; Riva et al., 2018). For example, using the hands with a contactless device (i.e., without gloves) makes interaction in virtual environments more natural. The Leap Motion device 1 allows users to use their hands in VR without gloves or markers. This simple, low-cost device lets VR users interact with virtual objects and related environments in a naturalistic way. When technology is able to be transparent, users can experience an increased sense of being in the virtual environment (the so-called sense of presence).

Other forms of interaction are possible and are being developed continuously. For example, tactile and haptic devices can provide continuous feedback to users, intensifying their experience by adding components such as the feeling of touch and the physical weight of virtual objects through force feedback. Another low-cost technology that facilitates interaction is motion tracking, such as Microsoft Kinect. Such technology tracks users' bodies, allowing them to interact with virtual environments using body movements and gestures. Most HMDs use an embedded system to track the HMD's position and rotation, as well as controllers that are generally placed in the user's hands. This tracking allows a great degree of interaction and improves the overall virtual experience.

A final emerging approach is the use of digital technologies to simulate not only the external world but also internal bodily signals (Azevedo et al., 2017; Riva et al., 2017): interoception, proprioception, and vestibular input. For example, Riva et al. (2017) recently introduced the concept of "sonoception" ( www.sonoception.com ), a novel non-invasive technological paradigm based on wearable acoustic and vibrotactile transducers that can alter internal bodily signals. This approach allowed the development of an interoceptive stimulator that is able both to assess interoceptive time perception in clinical patients (Di Lernia et al., 2018b) and to enhance heart rate variability (the short-term, vagally mediated component, rMSSD) through the modulation of the subjects' parasympathetic system (Di Lernia et al., 2018a).
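
The rMSSD index mentioned above is the root mean square of successive differences between interbeat (RR) intervals. A minimal sketch of the standard formula, on made-up RR values, is:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences between RR
    (interbeat) intervals, in milliseconds; higher values reflect
    stronger short-term, vagally mediated heart rate variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Made-up RR series (ms) from a resting recording.
print(round(rmssd([800, 810, 790, 805, 795]), 2))  # → 14.36
```

An intervention that strengthens parasympathetic activity would be expected to raise this value across a recording session.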

In this scenario, it is clear that the future of VR and AR research does not lie only in clinical applications, although the implications for patients are huge. The continuous development of VR and AR technologies is the result of research in computer science, engineering, and allied sciences. There are three reasons why a "clinical era" emerged from our analyses. First, all clinical research on VR and AR also includes technological developments, and new technological discoveries are published in clinical as well as technological journals, but with clinical samples as the main subject. As noted in our research, the main journals that publish numerous articles on technological developments tested with both healthy participants and patients include Presence: Teleoperators & Virtual Environments, Cyberpsychology & Behavior (Cyberpsychol Behav), and IEEE Computer Graphics and Applications (IEEE Comput Graph). It is clear that researchers in psychology, neuroscience, medicine, and the behavioral sciences in general have been investigating whether the technological developments of VR and AR are effective for users, indicating that clinical behavioral research has been incorporating large parts of computer science and engineering. A second aspect to consider is industrial development. Once a new technology is envisioned and created, it goes through a patent application; once the patent is sent for registration, the new technology may be made available to the market, and eventually for journal submission and publication. Moreover, much VR and AR research that proposes the development of a technology moves directly from presenting a prototype to receiving a patent and introducing it to the market, without publishing the findings in a scientific paper. Hence, if a new technology has been developed for the industrial or consumer market, but not for clinical purposes, the research conducted to develop it may never be published in a scientific paper.
Although our manuscript considered published research, we have to acknowledge the existence of several studies that have not been published at all. The third reason our analyses highlighted a "clinical era" is that only articles on VR and AR indexed in the Web of Knowledge database, our source of references, were considered; in this article, we referred to "research" as the work in that database. Of course, this is a limitation of our study, since there are several other databases of great value to the scientific community, such as the IEEE Xplore Digital Library, the ACM Digital Library, and many others. Generally, the most important articles in journals published in these databases are also included in the Web of Knowledge database; hence, we are convinced that our study considered the top-level publications in computer science and engineering. Accordingly, we believe that this limitation is mitigated by the large number of articles referenced in our research.

Considering all these aspects, it is clear that clinical applications, behavioral aspects, and technological developments in VR and AR research are parts of a situation that is more complex than the old platforms used before the widespread diffusion of HMDs and related solutions. We think that this work can provide a clearer vision for stakeholders, giving evidence of the current research frontiers and the challenges expected in the future, and highlighting the connections and implications of this research in several fields, such as the clinical, behavioral, industrial, entertainment, and educational domains, among many others.

Author Contributions

PC and GR conceived the idea. PC performed the data extraction and the computational analyses and wrote the first draft of the article. IG revised the introduction, adding important information to the article. PC, IG, MR, and GR revised the article and approved the final version after providing important input on the article's rationale.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The reviewer GC declared a shared affiliation, with no collaboration, with the authors PC and GR to the handling Editor at the time of the review.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2018.02086/full#supplementary-material

Footnotes

  1. ^ https://www.leapmotion.com/

References

Akçayır, M., and Akçayır, G. (2017). Advantages and challenges associated with augmented reality for education: a systematic review of the literature. Educ. Res. Rev. 20, 1–11. doi: 10.1016/j.edurev.2016.11.002

Alexander, T., Westhoven, M., and Conradi, J. (2017). “Virtual environments for competency-oriented education and training,” in Advances in Human Factors, Business Management, Training and Education , (Berlin: Springer International Publishing), 23–29. doi: 10.1007/978-3-319-42070-7_3

Andersen, S. M., and Thorpe, J. S. (2009). An if–thEN theory of personality: significant others and the relational self. J. Res. Pers. 43, 163–170. doi: 10.1016/j.jrp.2008.12.040

Azevedo, R. T., Bennett, N., Bilicki, A., Hooper, J., Markopoulou, F., and Tsakiris, M. (2017). The calming effect of a new wearable device during the anticipation of public speech. Sci. Rep. 7:2285. doi: 10.1038/s41598-017-02274-2

Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., and MacIntyre, B. (2001). Recent advances in augmented reality. IEEE Comp. Graph. Appl. 21, 34–47. doi: 10.1109/38.963459

Bacca, J., Baldiris, S., Fabregat, R., and Graf, S. (2014). Augmented reality trends in education: a systematic review of research and applications. J. Educ. Technol. Soc. 17, 133.

Bailenson, J. N., Yee, N., Merget, D., and Schroeder, R. (2006). The effect of behavioral realism and form realism of real-time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction. Presence 15, 359–372. doi: 10.1162/pres.15.4.359

Baños, R. M., Botella, C., Garcia-Palacios, A., Villa, H., Perpiñá, C., and Alcaniz, M. (2000). Presence and reality judgment in virtual environments: a unitary construct? Cyberpsychol. Behav. 3, 327–335. doi: 10.1089/10949310050078760

Baños, R., Botella, C., García-Palacios, A., Villa, H., Perpiñá, C., and Gallardo, M. (2009). Psychological variables and reality judgment in virtual environments: the roles of absorption and dissociation. Cyberpsychol. Behav. 2, 143–148. doi: 10.1089/cpb.1999.2.143

Baus, O., and Bouchard, S. (2014). Moving from virtual reality exposure-based therapy to augmented reality exposure-based therapy: a review. Front. Hum. Neurosci. 8:112. doi: 10.3389/fnhum.2014.00112

Biocca, F. (1997). The cyborg’s dilemma: progressive embodiment in virtual environments. J. Comput. Mediat. Commun. 3. doi: 10.1111/j.1083-6101.1997

Biocca, F., Harms, C., and Gregg, J. (2001). “The networked minds measure of social presence: pilot test of the factor structure and concurrent validity,” in 4th Annual International Workshop on Presence , Philadelphia, PA, 1–9.

Blanke, O., Slater, M., and Serino, A. (2015). Behavioral, neural, and computational principles of bodily self-consciousness. Neuron 88, 145–166. doi: 10.1016/j.neuron.2015.09.029

Bohil, C. J., Alicea, B., and Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nat. Rev. Neurosci. 12:752. doi: 10.1038/nrn3122

Borrego, A., Latorre, J., Llorens, R., Alcañiz, M., and Noé, E. (2016). Feasibility of a walking virtual reality system for rehabilitation: objective and subjective parameters. J. Neuroeng. Rehabil. 13:68. doi: 10.1186/s12984-016-0174-171

Botella, C., Bretón-López, J., Quero, S., Baños, R. M., and García-Palacios, A. (2010). Treating cockroach phobia with augmented reality. Behav. Ther. 41, 401–413. doi: 10.1016/j.beth.2009.07.002

Botella, C., Fernández-Álvarez, J., Guillén, V., García-Palacios, A., and Baños, R. (2017). Recent progress in virtual reality exposure therapy for phobias: a systematic review. Curr. Psychiatry Rep. 19:42. doi: 10.1007/s11920-017-0788-4

Botella, C. M., Juan, M. C., Baños, R. M., Alcañiz, M., Guillén, V., and Rey, B. (2005). Mixing realities? An application of augmented reality for the treatment of cockroach phobia. Cyberpsychol. Behav. 8, 162–171. doi: 10.1089/cpb.2005.8.162

Brandes, U. (2001). A faster algorithm for betweenness centrality. J. Math. Sociol. 25, 163–177. doi: 10.1080/0022250X.2001.9990249

Bretón-López, J., Quero, S., Botella, C., García-Palacios, A., Baños, R. M., and Alcañiz, M. (2010). An augmented reality system validation for the treatment of cockroach phobia. Cyberpsychol. Behav. Soc. Netw. 13, 705–710. doi: 10.1089/cyber.2009.0170

Brown, A., and Green, T. (2016). Virtual reality: low-cost tools and resources for the classroom. TechTrends 60, 517–519. doi: 10.1007/s11528-016-0102-z

Bu, Y., Liu, T. Y., and Huang, W. B. (2016). MACA: a modified author co-citation analysis method combined with general descriptive metadata of citations. Scientometrics 108, 143–166. doi: 10.1007/s11192-016-1959-5

Burdea, G., Richard, P., and Coiffet, P. (1996). Multimodal virtual reality: input-output devices, system integration, and human factors. Int. J. Hum. Compu. Interact. 8, 5–24. doi: 10.1080/10447319609526138

Burdea, G. C., and Coiffet, P. (2003). Virtual Reality Technology , Vol. 1, Hoboken, NJ: John Wiley & Sons.

Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., and Ivkovic, M. (2011). Augmented reality technologies, systems and applications. Multimed. Tools Appl. 51, 341–377. doi: 10.1007/s11042-010-0660-6

Castelvecchi, D. (2016). Low-cost headsets boost virtual reality’s lab appeal. Nature 533, 153–154. doi: 10.1038/533153a

Cathy (2011). The History of Augmented Reality. The Optical Vision Site. Available at: http://www.theopticalvisionsite.com/history-of-eyewear/the-history-of-augmented-reality/#.UelAUmeAOyA

Chen, C. (2006). CiteSpace II: detecting and visualizing emerging trends and transient patterns in scientific literature. J. Assoc. Inform. Sci. Technol. 57, 359–377. doi: 10.1002/asi.20317

Chen, C., Ibekwe-SanJuan, F., and Hou, J. (2010). The structure and dynamics of cocitation clusters: a multipleperspective cocitation analysis. J. Assoc. Inform. Sci. Technol. 61, 1386–1409. doi: 10.1002/jez.b.22741

Chen, Y. C., Chi, H. L., Hung, W. H., and Kang, S. C. (2011). Use of tangible and augmented reality models in engineering graphics courses. J. Prof. Issues Eng. Educ. Pract. 137, 267–276. doi: 10.1061/(ASCE)EI.1943-5541.0000078

Chicchi Giglioli, I. A., Pallavicini, F., Pedroli, E., Serino, S., and Riva, G. (2015). Augmented reality: a brand new challenge for the assessment and treatment of psychological disorders. Comput. Math. Methods Med. 2015:862942. doi: 10.1155/2015/862942

Chien, C. H., Chen, C. H., and Jeng, T. S. (2010). “An interactive augmented reality system for learning anatomy structure,” in Proceedings of the International Multiconference of Engineers and Computer Scientists , Vol. 1, (Hong Kong: International Association of Engineers), 17–19.

Choi, S., Jung, K., and Noh, S. D. (2015). Virtual reality applications in manufacturing industries: past research, present findings, and future directions. Concurr. Eng. 23, 40–63. doi: 10.1177/1063293X14568814

Cipresso, P. (2015). Modeling behavior dynamics using computational psychometrics within virtual worlds. Front. Psychol. 6:1725. doi: 10.3389/fpsyg.2015.01725

Cipresso, P., and Serino, S. (2014). Virtual Reality: Technologies, Medical Applications and Challenges. Hauppauge, NY: Nova Science Publishers, Inc.

Cipresso, P., Serino, S., and Riva, G. (2016). Psychometric assessment and behavioral experiments using a free virtual reality platform and computational science. BMC Med. Inform. Decis. Mak. 16:37. doi: 10.1186/s12911-016-0276-5

Cruz-Neira, C. (1993). “Virtual reality overview,” in SIGGRAPH 93 Course Notes 21st International Conference on Computer Graphics and Interactive Techniques, Orange County Convention Center , Orlando, FL.

De Buck, S., Maes, F., Ector, J., Bogaert, J., Dymarkowski, S., Heidbuchel, H., et al. (2005). An augmented reality system for patient-specific guidance of cardiac catheter ablation procedures. IEEE Trans. Med. Imaging 24, 1512–1524. doi: 10.1109/TMI.2005.857661

Di Lernia, D., Cipresso, P., Pedroli, E., and Riva, G. (2018a). Toward an embodied medicine: a portable device with programmable interoceptive stimulation for heart rate variability enhancement. Sensors (Basel) 18:2469. doi: 10.3390/s18082469

Di Lernia, D., Serino, S., Pezzulo, G., Pedroli, E., Cipresso, P., and Riva, G. (2018b). Feel the time. Time perception as a function of interoceptive processing. Front. Hum. Neurosci. 12:74. doi: 10.3389/fnhum.2018.00074

Di Serio, Á., Ibáñez, M. B., and Kloos, C. D. (2013). Impact of an augmented reality system on students’ motivation for a visual art course. Comput. Educ. 68, 586–596. doi: 10.1016/j.compedu.2012.03.002

Ebert, C. (2015). Looking into the future. IEEE Softw. 32, 92–97. doi: 10.1109/MS.2015.142

Englund, C., Olofsson, A. D., and Price, L. (2017). Teaching with technology in higher education: understanding conceptual change and development in practice. High. Educ. Res. Dev. 36, 73–87. doi: 10.1080/07294360.2016.1171300

Falagas, M. E., Pitsouni, E. I., Malietzis, G. A., and Pappas, G. (2008). Comparison of pubmed, scopus, web of science, and Google scholar: strengths and weaknesses. FASEB J. 22, 338–342. doi: 10.1096/fj.07-9492LSF

Feiner, S., MacIntyre, B., Hollerer, T., and Webster, A. (1997). “A touring machine: prototyping 3D mobile augmented reality systems for exploring the urban environment,” in Digest of Papers. First International Symposium on Wearable Computers , (Cambridge, MA: IEEE), 74–81. doi: 10.1109/ISWC.1997.629922

Freeman, D., Reeve, S., Robinson, A., Ehlers, A., Clark, D., Spanlang, B., et al. (2017). Virtual reality in the assessment, understanding, and treatment of mental health disorders. Psychol. Med. 47, 2393–2400. doi: 10.1017/S003329171700040X

Freeman, L. C. (1977). A set of measures of centrality based on betweenness. Sociometry 40, 35–41. doi: 10.2307/3033543

Fuchs, H., and Bishop, G. (1992). Research Directions in Virtual Environments. Chapel Hill, NC: University of North Carolina at Chapel Hill.

Gallagher, A. G., Ritter, E. M., Champion, H., Higgins, G., Fried, M. P., Moses, G., et al. (2005). Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann. Surg. 241:364. doi: 10.1097/01.sla.0000151982.85062.80

Gigante, M. A. (1993). Virtual reality: definitions, history and applications. Virtual Real. Syst. 3–14. doi: 10.1016/B978-0-12-227748-1.50009-3

González-Teruel, A., González-Alcaide, G., Barrios, M., and Abad-García, M. F. (2015). Mapping recent information behavior research: an analysis of co-authorship and co-citation networks. Scientometrics 103, 687–705. doi: 10.1007/s11192-015-1548-z

Heeter, C. (1992). Being there: the subjective experience of presence. Presence 1, 262–271. doi: 10.1162/pres.1992.1.2.262

Heeter, C. (2000). Interactivity in the context of designed experiences. J. Interact. Adv. 1, 3–14. doi: 10.1080/15252019.2000.10722040

Heilig, M. (1962). Sensorama simulator. U.S. Patent No. 3, 870. Virginia: United States Patent and Trademark Office.

Ibáñez, M. B., Di Serio, Á., Villarán, D., and Kloos, C. D. (2014). Experimenting with electromagnetism using augmented reality: impact on flow student experience and educational effectiveness. Comput. Educ. 71, 1–13. doi: 10.1016/j.compedu.2013.09.004

Juan, M. C., Alcañiz, M., Calatrava, J., Zaragozá, I., Baños, R., and Botella, C. (2007). “An optical see-through augmented reality system for the treatment of phobia to small animals,” in Virtual Reality, HCII 2007 Lecture Notes in Computer Science , Vol. 4563, ed. R. Schumaker (Berlin: Springer), 651–659.

Juan, M. C., Alcaniz, M., Monserrat, C., Botella, C., Baños, R. M., and Guerrero, B. (2005). Using augmented reality to treat phobias. IEEE Comput. Graph. Appl. 25, 31–37. doi: 10.1109/MCG.2005.143

Kim, G. J. (2005). A SWOT analysis of the field of virtual reality rehabilitation and therapy. Presence 14, 119–146. doi: 10.1162/1054746053967094

Klavans, R., and Boyack, K. W. (2015). Which type of citation analysis generates the most accurate taxonomy of scientific and technical knowledge? J. Assoc. Inform. Sci. Technol. 68, 984–998. doi: 10.1002/asi.23734

Kleinberg, J. (2002). “Bursty and hierarchical structure in streams,” in Paper Presented at the Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2002; Edmonton , Alberta, NT. doi: 10.1145/775047.775061

Kleinberg, J. (2003). Bursty and hierarchical structure in streams. Data Min. Knowl. Discov. 7, 373–397. doi: 10.1023/A:1024940629314

Korolov, M. (2014). The real risks of virtual reality. Risk Manag. 61, 20–24.

Krueger, M. W., Gionfriddo, T., and Hinrichsen, K. (1985). “Videoplace—an artificial reality,” in Proceedings of the ACM SIGCHI Bulletin , Vol. 16, New York, NY: ACM, 35–40. doi: 10.1145/317456.317463

Lin, C. H., and Hsu, P. H. (2017). “Integrating procedural modelling process and immersive VR environment for architectural design education,” in MATEC Web of Conferences , Vol. 104, Les Ulis: EDP Sciences. doi: 10.1051/matecconf/201710403007

Llorens, R., Noé, E., Ferri, J., and Alcañiz, M. (2014). Virtual reality-based telerehabilitation program for balance recovery. A pilot study in hemiparetic individuals with acquired brain injury. Brain Inj. 28:169.

Lombard, M., and Ditton, T. (1997). At the heart of it all: the concept of presence. J. Comput. Mediat. Commun. 3. doi: 10.1111/j.1083-6101.1997.tb00072.x

Loomis, J. M., Blascovich, J. J., and Beall, A. C. (1999). Immersive virtual environment technology as a basic research tool in psychology. Behav. Res. Methods Instr. Comput. 31, 557–564. doi: 10.3758/BF03200735

Loomis, J. M., Golledge, R. G., and Klatzky, R. L. (1998). Navigation system for the blind: auditory display modes and guidance. Presence 7, 193–203. doi: 10.1162/105474698565677

Luckerson, V. (2014). Facebook Buying Oculus Virtual-Reality Company for $2 Billion. Available at: http://time.com/37842/facebook-oculus-rift

Maurugeon, G. (2011). New D’Fusion Supports iPhone4S and 3xDSMax 2012. Available at: http://www.t-immersion.com/blog/2011-12-07/augmented-reality-dfusion-iphone-3dsmax

Mazuryk, T., and Gervautz, M. (1996). Virtual Reality-History, Applications, Technology and Future. Vienna: Institute of Computer Graphics Vienna University of Technology.

Meldrum, D., Glennon, A., Herdman, S., Murray, D., and McConn-Walsh, R. (2012). Virtual reality rehabilitation of balance: assessment of the usability of the nintendo Wii ® fit plus. Disabil. Rehabil. 7, 205–210. doi: 10.3109/17483107.2011.616922

Milgram, P., and Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Trans. Inform. Syst. 77, 1321–1329.

Minderer, M., Harvey, C. D., Donato, F., and Moser, E. I. (2016). Neuroscience: virtual reality explored. Nature 533, 324–325. doi: 10.1038/nature17899

Neri, S. G., Cardoso, J. R., Cruz, L., Lima, R. M., de Oliveira, R. J., Iversen, M. D., et al. (2017). Do virtual reality games improve mobility skills and balance measurements in community-dwelling older adults? Systematic review and meta-analysis. Clin. Rehabil. 31, 1292–1304. doi: 10.1177/0269215517694677

Nincarean, D., Alia, M. B., Halim, N. D. A., and Rahman, M. H. A. (2013). Mobile augmented reality: the potential for education. Procedia Soc. Behav. Sci. 103, 657–664. doi: 10.1016/j.sbspro.2013.10.385

Orosz, K., Farkas, I. J., and Pollner, P. (2016). Quantifying the changing role of past publications. Scientometrics 108, 829–853. doi: 10.1007/s11192-016-1971-9

Ozbek, C. S., Giesler, B., and Dillmann, R. (2004). “Jedi training: playful evaluation of head-mounted augmented reality display systems,” in Proceedings of SPIE. The International Society for Optical Engineering , Vol. 5291, eds R. A. Norwood, M. Eich, and M. G. Kuzyk (Denver, CO), 454–463.

Perry, S. (2008). Wikitude: Android App with Augmented Reality: Mind Blow-Ing. Digital Lifestyles.

Radu, I. (2012). “Why should my students use AR? A comparative review of the educational impacts of augmented-reality,” in Mixed and Augmented Reality (ISMAR), 2012 IEEE International Symposium on , (IEEE), 313–314. doi: 10.1109/ISMAR.2012.6402590

Radu, I. (2014). Augmented reality in education: a meta-review and cross-media analysis. Pers. Ubiquitous Comput. 18, 1533–1543. doi: 10.1007/s00779-013-0747-y

Riva, G. (2018). The neuroscience of body memory: From the self through the space to the others. Cortex 104, 241–260. doi: 10.1016/j.cortex.2017.07.013

Riva, G., Gaggioli, A., Grassi, A., Raspelli, S., Cipresso, P., Pallavicini, F., et al. (2011). NeuroVR 2-A free virtual reality platform for the assessment and treatment in behavioral health care. Stud. Health Technol. Inform. 163, 493–495.

PubMed Abstract | Google Scholar

Riva, G., Serino, S., Di Lernia, D., Pavone, E. F., and Dakanalis, A. (2017). Embodied medicine: mens sana in corpore virtuale sano. Front. Hum. Neurosci. 11:120. doi: 10.3389/fnhum.2017.00120

Riva, G., Wiederhold, B. K., and Mantovani, F. (2018). Neuroscience of virtual reality: from virtual exposure to embodied medicine. Cyberpsychol. Behav. Soc. Netw. doi: 10.1089/cyber.2017.29099.gri [Epub ahead of print].

Rosenberg, L. (1993). “The use of virtual fixtures to enhance telemanipulation with time delay,” in Proceedings of the ASME Winter Anual Meeting on Advances in Robotics, Mechatronics, and Haptic Interfaces , Vol. 49, (New Orleans, LA), 29–36.

Schmidt, M., Beck, D., Glaser, N., and Schmidt, C. (2017). “A prototype immersive, multi-user 3D virtual learning environment for individuals with autism to learn social and life skills: a virtuoso DBR update,” in International Conference on Immersive Learning , Cham: Springer, 185–188. doi: 10.1007/978-3-319-60633-0_15

Schwald, B., and De Laval, B. (2003). An augmented reality system for training and assistance to maintenance in the industrial context. J. WSCG 11.

Serino, S., Cipresso, P., Morganti, F., and Riva, G. (2014). The role of egocentric and allocentric abilities in Alzheimer’s disease: a systematic review. Ageing Res. Rev. 16, 32–44. doi: 10.1016/j.arr.2014.04.004

Skalski, P., and Tamborini, R. (2007). The role of social presence in interactive agent-based persuasion. Media Psychol. 10, 385–413. doi: 10.1080/15213260701533102

Slater, M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R. Soc. Lond. B Biol. Sci. 364, 3549–3557. doi: 10.1098/rstb.2009.0138

Slater, M., and Sanchez-Vives, M. V. (2016). Enhancing our lives with immersive virtual reality. Front. Robot. AI 3:74. doi: 10.3389/frobt.2016.00074

Small, H. (1973). Co-citation in the scientific literature: a new measure of the relationship between two documents. J. Assoc. Inform. Sci. Technol. 24, 265–269. doi: 10.1002/asi.4630240406

Song, H., Chen, F., Peng, Q., Zhang, J., and Gu, P. (2017). Improvement of user experience using virtual reality in open-architecture product design. Proc. Inst. Mech. Eng. B J. Eng. Manufact. 232.

Sundar, S. S., Xu, Q., and Bellur, S. (2010). “Designing interactivity in media interfaces: a communications perspective,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems , (Boston, MA: ACM), 2247–2256. doi: 10.1145/1753326.1753666

Sutherland, I. E. (1965). The Ultimate Display. Multimedia: From Wagner to Virtual Reality. New York, NY: Norton.

Sutherland, I. E. (1968). “A head-mounted three dimensional display,” in Proceedings of the December 9-11, 1968, Fall Joint Computer Conference, Part I , (ACM), 757–764. doi: 10.1145/1476589.1476686

Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris, M., et al. (2000). “ARQuake: an outdoor/indoor augmented reality first person application,” in Digest of Papers. Fourth International Symposium on Wearable Computers , (Atlanta, GA: IEEE), 139–146. doi: 10.1109/ISWC.2000.888480

Ware, C., Arthur, K., and Booth, K. S. (1993). “Fish tank virtual reality,” in Proceedings of the INTERACT’93 and CHI’93 Conference on Human Factors in Computing Systems , (Amsterdam: ACM), 37–42. doi: 10.1145/169059.169066

Wexelblat, A. (ed.) (2014). Virtual Reality: Applications and Explorations. Cambridge, MA: Academic Press.

White, H. D., and Griffith, B. C. (1981). Author cocitation: a literature measure of intellectual structure. J. Assoc. Inform. Sci. Technol. 32, 163–171. doi: 10.1002/asi.4630320302

Wrzesien, M., Alcañiz, M., Botella, C., Burkhardt, J. M., Bretón-López, J., Ortega, M., et al. (2013). The therapeutic lamp: treating small-animal phobias. IEEE Comput. Graph. Appl. 33, 80–86. doi: 10.1109/MCG.2013.12

Wrzesien, M., Burkhardt, J. M., Alcañiz, M., and Botella, C. (2011a). How technology influences the therapeutic process: a comparative field evaluation of augmented reality and in vivo exposure therapy for phobia of small animals. Hum. Comput. Interact. 2011, 523–540.

Wrzesien, M., Burkhardt, J. M., Alcañiz Raya, M., and Botella, C. (2011b). “Mixing psychology and HCI in evaluation of augmented reality mental health technology,” in CHI’11 Extended Abstracts on Human Factors in Computing Systems , (Vancouver, BC: ACM), 2119–2124.

Zyda, M. (2005). From visual simulation to virtual reality to games. Computer 38, 25–32. doi: 10.1109/MC.2005.297

Keywords : virtual reality, augmented reality, quantitative psychology, measurement, psychometrics, scientometrics, computational psychometrics, mathematical psychology

Citation: Cipresso P, Giglioli IAC, Raya MA and Riva G (2018) The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature. Front. Psychol. 9:2086. doi: 10.3389/fpsyg.2018.02086

Received: 14 December 2017; Accepted: 10 October 2018; Published: 06 November 2018.

Copyright © 2018 Cipresso, Giglioli, Raya and Riva. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Pietro Cipresso, [email protected]

April 24, 2024

Virtual reality can motivate people to donate to refugee crises regardless of politics

by Washington State University

Political conservatives who watched a documentary on Syrian refugees with a virtual reality headset had far more sympathy for the people depicted in the film than those who viewed the same film on a two-dimensional computer screen.

Higher sympathy levels among the conservatives who watched the VR version of the documentary, "Clouds over Sidra," translated into a greater willingness to donate to the crisis, according to the study, published in New Media & Society.

Liberal participants in the study reported high levels of sympathy and intention to donate after watching both versions of the documentary. The Washington State University-led analysis suggests that, by offering a uniquely immersive experience, VR technology may be able to bridge the gap between different ideological perspectives and shift audience attitudes toward greater sympathy and generosity toward refugees. The results could have implications for organizations trying to mobilize action on human suffering.

"We wanted to see if people's political views would play a role in how they responded emotionally to VR as this has not been heavily studied," said Porismita Borah, a professor in the Edward R. Murrow College of Education and lead author of the study. "We found that irrespective of political ideology, people in the VR condition felt more sympathy towards refugees and were more inclined toward donating."

For the study, Borah and colleagues from WSU, Texas Tech University and Purdue University set out to investigate the impact of VR technology on a politically diverse group of people's empathy and sympathy towards refugees. They also looked at VR technology's influence on the study participants' willingness to donate to relief organizations.

More than 200 college-aged individuals participated in two experiments, a pilot study in fall 2019 and the main study in fall 2021. In both studies, participants self-reported their political affiliation and were divided into VR and non-VR groups to watch "Clouds Over Sidra," a United Nations documentary portraying the life of a 12-year-old Syrian girl in a Jordanian refugee camp. Before and after watching the documentary, both groups were surveyed on their levels of empathy, sympathy and intention to donate to various humanitarian aid organizations.

While VR technology was found to enhance both sympathy and empathy overall toward the plight of refugees, its effects varied when political ideology entered the equation.

Notably, conservatives reported much larger increases in sympathy after experiencing the VR version than conservatives who watched the documentary in a traditional video format. This increase in sympathy also made conservatives in the VR group more willing to donate to relief organizations than their counterparts who watched the documentary in two dimensions on a computer screen. Liberals, by contrast, had higher levels of sympathy toward refugees to begin with and indicated a willingness to donate after watching either version of the video.
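The study's pre/post, two-condition design can be quantified as a simple difference-in-differences: the VR effect within each ideology is the VR group's sympathy change minus the 2D group's, and the gap between those two effects is the condition-by-ideology interaction. The sketch below is purely illustrative — every score is invented (on a hypothetical 1–7 scale), and the paper reports only the aggregate pattern, not these numbers or this exact analysis:

```python
# Hypothetical illustration of a condition-by-ideology interaction.
# All scores are invented; only the qualitative pattern mirrors the study.
from statistics import mean

# (condition, ideology) -> list of (pre, post) sympathy scores, 1-7 scale
scores = {
    ("VR", "conservative"): [(3.0, 5.5), (2.5, 5.0), (3.5, 6.0)],
    ("2D", "conservative"): [(3.0, 3.5), (2.5, 3.0), (3.5, 4.0)],
    ("VR", "liberal"):      [(5.5, 6.5), (6.0, 6.5), (5.0, 6.0)],
    ("2D", "liberal"):      [(5.5, 6.0), (6.0, 6.5), (5.0, 5.5)],
}

def mean_change(cell):
    """Average post-minus-pre sympathy change for one cell of the design."""
    return mean(post - pre for pre, post in scores[cell])

# VR effect within each ideology: VR change minus 2D change
vr_effect_cons = mean_change(("VR", "conservative")) - mean_change(("2D", "conservative"))
vr_effect_lib = mean_change(("VR", "liberal")) - mean_change(("2D", "liberal"))

# Condition-by-ideology interaction: how much more VR moves
# conservatives than liberals, relative to the 2D baseline
interaction = vr_effect_cons - vr_effect_lib

print(f"VR effect, conservatives: {vr_effect_cons:+.2f}")
print(f"VR effect, liberals:      {vr_effect_lib:+.2f}")
print(f"Interaction:              {interaction:+.2f}")
```

A positive `interaction` value captures the headline finding: the VR boost in sympathy is larger for conservatives than for liberals, whose scores start near the top of the scale and so have less room to move.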

The researchers acknowledge that there are some limitations to their work. The study gauged people's emotional responses to only one crisis and all the participants were college-aged.

Nevertheless, the work highlights the emerging potential of VR to influence political attitudes and engagement with humanitarian issues, with implications for both theory and practice.

"Understanding how political ideology can interact with the VR experience is crucial and shows that emerging technologies might be able to interact with predispositions such as ideology," Borah said. "I think this work may have practical applications for NGOs and other organizations striving to find innovative ways to engage the public about refugee crises and other humanitarian disasters."

Co-authors include Bimbisar Irom, Yoon Joo Lee, Danielle Ka Lai Lee, Di Mu and Ron Price from WSU as well as Anastasia Vishnevskaya from Texas Tech University and Eylul Yel from Purdue University.

Journal information: New Media & Society

Provided by Washington State University

Computer Science > Artificial Intelligence

Title: The Application of Augmented Reality (AR) in Remote Work and Education

Abstract: With the rapid advancement of technology, Augmented Reality (AR) technology, known for its ability to deeply integrate virtual information with the real world, is gradually transforming traditional work modes and teaching methods. Particularly in the realms of remote work and online education, AR technology demonstrates a broad spectrum of application prospects. This paper delves into the application potential and actual effects of AR technology in remote work and education. Through a systematic literature review, this study outlines the key features, advantages, and challenges of AR technology. Based on theoretical analysis, it discusses the scientific basis and technical support that AR technology provides for enhancing remote work efficiency and promoting innovation in educational teaching models. Additionally, by designing an empirical research plan and analyzing experimental data, this article reveals the specific performance and influencing factors of AR technology in practical applications. Finally, based on the results of the experiments, this research summarizes the application value of AR technology in remote work and education, looks forward to its future development trends, and proposes forward-looking research directions and strategic suggestions, offering empirical foundation and theoretical guidance for further promoting the in-depth application of AR technology in related fields.

How AI may reshape health care in South Florida

The technology is beginning to impact how patients receive care, from the use of virtual reality to deploying facial recognition for check-in. These were among the use cases on view at the recent eMerge Americas conference.

Patient facial recognition

Using virtual reality to help reduce patient anxiety

Targeting Alzheimer's

AI to teach

'Pushing the limits of healthcare' on the race track

COMMENTS

  1. How Virtual Reality Technology Has Changed Our Lives: An Overview of

    The gathered papers and articles were then reviewed to further select representative and up-to-date evidence. ... This literature review has shown how virtual reality technology has the potential to be a greatly beneficial tool in a multitude of applications and a wide variety of fields. Current applications span different domains such as ...

  2. Virtual reality (VR)

    Show More. virtual reality (VR), the use of computer modeling and simulation that enables a person to interact with an artificial three-dimensional (3-D) visual or other sensory environment. VR applications immerse the user in a computer-generated environment that simulates reality through the use of interactive devices, which send and receive ...

  3. Virtual Reality Essays

    Writing Tips for an Essay on Virtual Reality. When writing an essay on virtual reality, it's important to consider the following tips: Research extensively: Start by conducting thorough research on virtual reality, including its history, current applications, and future potential. This will provide you with a solid foundation for your essay.

  4. 109 Virtual Reality Essay Topics & Samples

    109 Virtual Reality Topics & Essay Examples. When writing a virtual reality essay, it is hard to find just one area to focus on. Our experts have outlined 104 titles for you to choose from. Humanity has made amazing leaps in technology over the past several years.

  5. What is virtual reality?

    What differentiates VR from an ordinary computer experience (using your PC to write an essay or play games) is the nature of the input and output. Where an ordinary computer uses things like a keyboard, mouse, or ... Pros and cons of virtual reality. Like any technology, virtual reality has both good and bad points. ...

  6. Virtual Reality Technology

    For example, users are not required to search for files on computers because with virtual reality, it is possible for them to access the files by opening the drawers containing them (Biocca 5). Although virtual reality can impact societies positively, it could also impact the society negatively. The first negative impact of this technological ...

  7. Analyzing augmented reality (AR) and virtual reality (VR) recent

    Augmented and virtual reality (AR & VR) are two of the most innovative technology advancements in the world today, and their potential for improving the education system is massive. The use of Augmented Reality (AR) and Virtual Reality (VR) in education has been on the rise in recent years and provides a wealth of opportunities to leverage ...

  8. Augmented reality and virtual reality displays: emerging ...

    With rapid advances in high-speed communication and computation, augmented reality (AR) and virtual reality (VR) are emerging as next-generation display platforms for deeper human-digital ...

  9. Recent advances in virtual reality and psychology: Introduction to the

    This editorial introduces the current Special Issue of Translational Issues in Psychological Science which provides novel research findings and perspectives on the use of Virtual Reality (VR) for psychological applications. The variety of topics presented in this themed issue underscore the broad applicability (and potential value) of VR technology for addressing the diverse challenges that ...

  10. Virtual Reality Technology for Wide Target Audience Essay

    Introduction. Virtual Reality devices and software are quickly gaining a foothold in the market, despite being a young technology. VR has the benefit of appealing to a very broad target audience, which contains children, teenagers, video gamers, mobile phone users, adults, people in the industry, and the others.

  11. (PDF) A Study of Virtual Reality

    Abstract. Virtual reality (VR) is a powerful and interactive technology that changes our life unlike any other. Virtual reality, which can also be termed as immersive multimedia, is the art of ...

  12. The Past, Present, and Future of Virtual and Augmented Reality Research

    Virtual Reality Concepts and Features. The concept of VR can be traced to the mid-1960s, when Ivan Sutherland, in a pivotal manuscript, described VR as a window through which a user perceives the virtual world as if it looked, felt, and sounded real, and in which the user could act realistically (Sutherland, 1965). Since that time, and depending on the application area, several ...

  13. Virtual Reality Free Essay Examples And Topic Ideas

    Virtual Reality (VR) is a powerful technology that has the potential to cause a multitude of social and psychological problems. VR is defined as a "computer-generated display that allows ...

  14. Virtual Reality Essay

    Virtual reality is a computer-generated environment that uses technology to convey seemingly real experiences to our brains and senses. It fills the gaps left by other modes of communication by making it possible to use technology to create lifelike experiences.

  15. Study and Analysis of Virtual Reality and its Impact on the Current Era

    Nowadays, virtual reality (VR) technology is widely used in many fields and is becoming mainstream thanks to its features (e.g., immersive experience, personalization, and entertainment). As it develops, it provides a new platform that makes the technology more accessible and exciting, progressively changing how people create and live. The real-life impacts of VR and its effects on ...

  16. (PDF) VR-Research paper

    The technology in Virtual Reality: To some people, VR is a specific collection of technologies; ... To know and learn about VR from research papers and to take it to a new level.

  17. Virtual Reality

    Virtual reality is a computer technology that immerses a user in an imagined or replicated world (like video games, movies, or flight simulation), or simulates presence in the real world (like gliding through the canals on a gondola in Venice, or attending a Grammy Awards ceremony). The user experiences VR through a headset, sometimes in ...

  18. Virtual Reality: Ethical Challenges and Dangers

    While the potential advantages of virtual reality are limitless, there has been much debate about the ethical complexities that this new technology presents [9], [19]. Potential ethical implications of VR include physiological and cognitive impacts and behavioral and social dynamics. Identifying and managing procedures to address emerging ...

  19. Virtual Reality: The Technology of the Future

    Virtual Reality: The Technology of the Future. Virtual reality (VR) is a technology that allows the user to interact with a computer-simulated environment, whether an actual or an imagined one. Most contemporary virtual reality environments are fundamentally visual experiences, shown either ...

  20. Implementation of virtual reality in healthcare: a scoping review on

    VR is a technology that uses a headset to simulate a reality in which the user is immersed in a virtual environment, creating the impression that the user is physically present in this virtual space [1, 2]. VR offers a broad range of possibilities in which the user can interact with a virtual environment or with virtual characters.

  21. Virtual reality technology for learning detailed design in landscape

    There is much interest in employing computer technology in design professions and education. However, few attempts have been made to apply immersive visualization technology to learn design details in landscape architecture. This study aims to illuminate how virtual reality (VR) technology helps students with design details in landscape architecture. Students were given a course project to ...

  22. Virtual Reality's Main Benefits

    Thesis Statement. Virtual reality is a fast-developing technology that carries a multitude of benefits for such professional fields as healthcare, education, military, versatile training, psychology, psychiatry, and entertainment; however, the technology is still in development and has a set of weaknesses that prevent it from ...

  23. Meta unveils new virtual reality headsets

    AYESHA RASCOE, HOST: Facebook's parent company, Meta, has a new educational product for its Quest virtual reality headset, intended to go along with third-party educational apps.

  24. The Past, Present, and Future of Virtual and Augmented Reality Research

    1 Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano, Milan, Italy; 2 Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy; 3 Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, Valencia, Spain; The recent appearance of low cost virtual reality (VR) technologies - like the Oculus Rift, the HTC ...

  25. Applied Sciences

    The notion of the smart city involves embedding Industry 4.0 technologies to improve the lives of inhabitants in urban environments. Within this context, smart city data layers (SCDLs) concern the integration of extra tiers of information for the purposes of improving communication potential. Under the Industry 4.0 technology grouping, advanced communication technologies, such as virtual ...

  26. Virtual reality can motivate people to donate to refugee crises

    For the study, Borah and colleagues from WSU, Texas Tech University and Purdue University set out to investigate the impact of VR technology on a politically diverse group of people's empathy and ...

  27. [2404.10579] The application of Augmented Reality (AR) in Remote Work

    View PDF Abstract: With the rapid advancement of technology, Augmented Reality (AR) technology, known for its ability to deeply integrate virtual information with the real world, is gradually transforming traditional work modes and teaching methods. Particularly in the realms of remote work and online education, AR technology demonstrates a broad spectrum of application prospects.

  28. Seeing is believing: Bringing virtual reality into the clinic

    We aim to translate 3D virtual models to the operating room to assist in complex oncologic surgery. The burgeoning technology making this feasible is termed augmented reality (AR). Using AR headsets will one day allow the overlay of virtual models onto patient anatomy in the operating room, updating in real-time as surgery is carried out.

  29. Virtual reality can motivate people to donate to refugee crises

    Virtual reality technology was found to enhance both sympathy and empathy overall toward the plight of refugees in a recent study led by WSU (photo composite featuring iStock images). ... Wash. — Political conservatives who watched a documentary on Syrian refugees with a virtual reality headset had far more sympathy for the people depicted in ...

  30. How AI May Reshape Health Care in South Florida

    The technology is beginning to impact how patients receive care, from the use of virtual reality to deploying facial recognition for check-in. These were among the use cases on view at the recent ...