Virtual Reality: “A system in which images that look like real objects are created by computer and can be interacted with by using special electronic equipment.”
The Oxford dictionary definition of virtual reality is true from a certain perspective. However, the boundaries of virtual reality are constantly being pushed by advances in computing, virtual reality, and related research areas.
The work of Kevin Warwick is one example of how virtual reality is a continually developing area. Warwick is a professor of cybernetics researching the concept of the human ‘cyborg’. In two experiments Warwick had electronic implants placed in his body. The first was a radio frequency identification (RFID) chip, monitored over a wireless link by a computer in his office; by tracking his movements, the computer could open doors and switch on lights for him accordingly. The second implant was a 100-electrode array connected directly to Warwick’s nervous system through his left arm, and linked by wire or wirelessly to another computer. This implant allowed Warwick to control the movement of a robotic arm and to exercise neural control over an intelligent home environment. It is hoped that the data communicated to the computer in this second experiment can also be transmitted back down the connection to the arm and translated into an emotional response in the participant. This is one example of where the lines between the real and the virtual become blurred: should it be possible to send signals back through the connection to the nervous system, the computer could be used to create artificial emotional signals that elicit a desired response in Warwick. In theory any number of emotional responses could be re-created, given enough information to do so.
Another example challenging the Oxford dictionary definition is the development of the ‘seeing tongue’. Developed by scientist Paul Bach-y-Rita, the seeing tongue allows the wearer, whether blind or simply blindfolded, to see the general shape of objects through their tongue. Images from a camera are transmitted through a cable to a control box and then on to a 12-by-12 gold-plated electrode array roughly the size of a dessert fork, which the wearer places on their tongue like a lollipop. The stimulation from the electrodes produces sensations that subjects can then interpret as visual signals. The device has clear implications for scientific research, notably in the fields of neuroscience and psychology.
Both of these devices could be adapted and integrated into virtual reality systems to provide a new definition of virtual reality that takes it beyond the concept of interacting with images, towards interacting with ‘real’ virtual objects – objects that exist virtually, could be given virtual mass, texture, and feeling, and could be perceived directly by the user. Special equipment would still be used, but perception would not seem to occur ‘through’ the equipment; it would feel like direct interaction with the object.
These technologies are still being developed, but other technologies that challenge the Oxford dictionary definition while fitting more current ideas of virtual reality are looked at in more detail below.
Some Virtual Reality research areas are as follows:
- Vision, 3D Graphics
- Around Sound
- Smell/Taste
- Immersive Reality
- Touch, Haptic Feedback
- Augmented Reality
Vision, 3D Graphics
Vanishing points create perspective within a picture, giving the illusion of depth within a two-dimensional image. Constantly changing the vanishing point to reflect the movement of the viewer can create the illusion of navigating a 3D space.
Creating this illusion with a computer depends on the computer’s ability to redraw the vanishing points and the scene as quickly and as accurately as possible while the user navigates the environment. The process by which the computer re-calculates and re-draws these vanishing points is called rendering. Rendering involves not only re-calculating the vanishing points and redrawing the entire scene, but also re-calculating the light sources and how light would be reflected and absorbed by the various objects, so that the image can be given other three-dimensional qualities such as variations in shade, colour intensity, and shadows.
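As a rough illustration of the calculation a renderer repeats every frame, the sketch below uses a hypothetical pinhole-camera model (not taken from any package mentioned here) to project 3D points onto a 2D image plane, showing how parallel lines converge towards a vanishing point:

```python
# Minimal sketch of perspective projection. Assumes a pinhole camera
# at the origin looking down the +z axis with focal length f; these
# names and values are illustrative, not from any real renderer.

def project(point, f=1.0):
    """Project a 3D point (x, y, z) onto the 2D image plane z = f."""
    x, y, z = point
    return (f * x / z, f * y / z)

# Two parallel rails receding into the distance: as z grows, both
# projected points approach the same vanishing point at (0, 0).
for z in (1.0, 10.0, 100.0, 1000.0):
    left = project((-1.0, -1.0, z))
    right = project((1.0, -1.0, z))
    print(z, left, right)
```

Re-running this projection for every visible point each time the viewpoint moves is, in essence, the re-calculation step described above; real renderers add the lighting calculations on top.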
In terms of virtual reality, projecting a 3D image on a screen is not necessarily enough to create the illusion of a virtual environment, due to issues with perception including spatial awareness and depth perception [1]. Stereoscopic vision looks to resolve these issues, allowing look-around, walk-around and fly-through capabilities in virtual environments. Some users are still unable to pick up this perception: according to 3D photographer Boris Starosta, 10% of the population are ‘stereo-blind’ and must rely on non-stereoscopic depth cues within 2D pictures.
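The depth cue that stereoscopic displays reproduce is binocular disparity: each eye sees a slightly shifted image, and the shift shrinks with distance. The small calculation below illustrates this; the eye-separation and focal-length figures are illustrative assumptions, not measured values:

```python
# Sketch of the binocular-disparity depth cue that stereoscopic
# displays reproduce. baseline = distance between the eyes (m),
# focal = distance from lens to image (m); both are rough figures.

def disparity(depth, baseline=0.065, focal=0.017):
    """Horizontal offset between the left- and right-eye projections
    of a point at the given depth (simple pinhole model)."""
    return baseline * focal / depth

# Nearer objects produce larger disparity, which the brain reads
# as depth; 'stereo-blind' viewers cannot use this cue.
for z in (0.5, 2.0, 10.0):
    print(z, disparity(z))
```

A stereoscopic display works in reverse: it presents each eye with images offset by exactly this disparity so that a flat screen appears to have depth.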
Software that allows us to create 3D representations of images includes QuickTime VR and Holomatix Blaze 3D. The Tate gallery’s Insight project has adopted Blaze 3D as part of its exploration of methods of representing art digitally. Blaze 3D is a similar package to QuickTime VR and can create a view that can be rotated 360 degrees around, under and above an object; the view can also be zoomed into and explored in more detail.
Also making use of 3D vision beyond gaming technologies are art museums such as the Musée du Quai Branly in Paris and the Museo Virtual de Artes El País in Uruguay. Both museums exist purely virtually, using virtual architecture that can be explored with tools for interacting with the environment. The Musée du Quai Branly has the additional quality of being modelled using stereoscopic techniques: it can be viewed within an immersive environment and interacted with using specially designed controls. 3D virtual representations such as those found in games like Quake are also being used in many other areas. One such area, in Australia, is helping to gain an understanding of patient hallucinations. Descriptions of real psychotic events can be depicted in software as 3D representations – an abyss can appear where the floor should be, distorted mirror images of the patient can be shown, and abusive voices can be presented simultaneously with the images on screen. Such technology is employed in cognitive behavioural therapy to teach patients how to ignore hallucinations, and is one more example among the increasing uses of three-dimensional representations.
Around Sound

As discussed within the field of electronic music, the ability to create sound by electronic means has been around for over a century. The ability to do so with a computer has been theoretically possible since the computer’s conception, and physically possible since experts began experimenting with systems in the 1950s. Sounds can be synthesised and created using a computer, or simply re-presented through the computer’s speakers. Creating realistic sound requires our ears to receive the sound at slightly different times. Technologies such as Dolby Stereo surround sound allow us to create realistic sound using multiple audio channels, speakers and a decoder.

Smell/Taste

As mentioned above, sight can be recreated through the stimulation of electrodes on the human tongue, and the idea that emotions can be recorded and then manipulated through controls connected directly to the human nervous system is explored in Kevin Warwick’s experiments. These experiments are only a step away from the creation of virtual smell and taste. Should emotion be re-created by sending virtual signals through the nervous system, it is a logical step that, if the correct signals were discovered, the same techniques could be used to create the sensation of smell and taste in the wearer of a special device. Although still theoretical, the ability to record emotions and reactions to particular situations may lie in the same realm as re-creating those emotions and sensations, possibly including smell and taste.
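The timing cue behind realistic sound mentioned above – each ear receiving a sound at a slightly different moment – can be sketched with a simple two-point model of the head. The head width used here is an illustrative round figure, not an anatomical measurement:

```python
import math

# Sketch of the interaural time difference cue: a sound off to one
# side travels a slightly longer path to the far ear, so it arrives
# there a fraction of a millisecond later.

def itd(azimuth_deg, head_width=0.2, speed_of_sound=343.0):
    """Extra arrival delay (seconds) at the far ear for a distant
    source azimuth_deg degrees off centre (two-point head model)."""
    extra_path = head_width * math.sin(math.radians(azimuth_deg))
    return extra_path / speed_of_sound

# Straight ahead there is no delay; a source at 90 degrees gives the
# maximum delay. Surround-sound systems recreate such cues by feeding
# appropriately timed signals to multiple speakers around the listener.
print(itd(0.0))
print(itd(90.0))
```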
Immersive Reality

Immersive environments create an experience that makes the user feel as if they are actually within a ‘virtual’ environment. Immersive reality gives the user a complete field of view through 360 degrees around them. A head-mounted display can be used to help achieve this effect, or the user can experience the display within a small room dedicated to the projection of the environment. Stereoscopic vision allows some form of interaction with objects within the 3D environment by creating an interactive space that can be explored differently from a simple 3D projection on a display screen. Immersive reality is closest to the Oxford dictionary definition: “A system in which images that look like real objects are created by computer and can be interacted with by using special electronic equipment”. The combination of advanced display techniques with realistic sound and interaction allows the user to feel completely immersed within a ‘virtual reality’.
Touch, Haptic Feedback

Haptics is described as the science of applying tactile sensation to human interaction with computers. A haptic device is one that involves physical contact between the computer and the user, usually through an input/output device such as a joystick or data glove. Haptic feedback is the term for the sensation created by this physical contact. For example, a user can pick up a virtual tennis ball using a data glove: the computer senses the movement and moves the virtual ball on the display, and because of the nature of the haptic interface, the user feels the tennis ball through tactile sensations that the computer sends through the data glove, mimicking the feel of the ball in the user’s hand [2].
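One common way such pushback forces are computed is penalty-based haptic rendering: while the user’s hand penetrates the surface of a virtual object, the device plays back a spring-like force proportional to the penetration depth. The one-dimensional sketch below uses illustrative names and constants, not the interface of any real glove:

```python
# Penalty-based haptic rendering sketch (1-D). The stiffness constant
# and geometry are illustrative; a real system runs this loop around
# 1000 times per second to feel smooth.

def contact_force(finger_pos, ball_centre, ball_radius, stiffness=500.0):
    """Return the reaction force (N): zero outside the virtual ball,
    spring-like pushback proportional to penetration depth inside it."""
    penetration = ball_radius - abs(finger_pos - ball_centre)
    if penetration <= 0:
        return 0.0                      # not touching the ball
    return stiffness * penetration      # push the finger back out

print(contact_force(0.10, 0.0, 0.05))  # outside the ball: no force
print(contact_force(0.04, 0.0, 0.05))  # 1 cm inside: firm pushback
```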
Augmented Reality

Augmented reality is the combination of virtual information with a real-life space. Information or images can be projected onto a screen of some form used within a real environment, augmenting that environment with additional information about it. A head-up display within an aircraft is an example of augmented reality: additional information the pilot requires can be projected onto the cockpit, or onto the pilot’s visor, unobtrusively and without affecting the pilot’s normal view. The information enhances the experience of flying the plane by presenting what the pilot may need – such as location, or targeting information in battle – in a practical and usable way. The underlying objective is to enhance the user’s performance in, and perception of, the world. The ideal is a system in which the user cannot tell the difference between the real world and the virtual augmentation of it, perceiving both as a single real scene [3].
Augmented reality can also be used in everyday life, for example a head-up display in our own cars depicting speed information and location provided by a global positioning system. An augmented reality display might eventually be developed on a device similar to a pair of glasses: as the wearer moves, the display would change to provide additional information about the environment. An enticing idea is the combination of such a display with technology like content-based image retrieval (CBIR) for identifying objects, used within an art gallery setting. A user could look at a picture or sculpture within the gallery; it would be identified from its basic shape and parameters using CBIR, and information about the work would be shown on the display. This would enhance the experience of going around an art gallery in the same way audio tours do today, but the augmented reality display or headset provides scope for more personalised and relevant information.

Hybrid environments combine a number of the technologies mentioned here. A hybrid environment merges the virtual and the real and can be used within a real or virtual situation. In the field of conflict simulation we looked at the hybrid environment of a flight simulator, which merges several technologies: an immersive environment with a real cockpit and flight controls; haptic feedback to recreate the feel of flying a real aircraft, such as the sensations of landing or turbulence; a computer-generated sky, runway, and background presented on the view screens; and augmented reality providing additional controls and displays on the cockpit window, just as in real aircraft such as the Eurofighter.
This can be described as a ‘hybrid environment’, merging the real with the virtual along with ‘rich’ user interaction. Together with the other example mentioned – the modelling of cranial implants using a hybrid environment – it is clear that the boundaries of the definition of virtual reality discussed above are being pushed. The uses of such technology are still being discovered, but there are clear applications in everyday life: entertainment; education, which can be enriched by providing students with 3D interactive views of anatomy and architecture; and even NASA, which has adopted virtual and hybrid environments for assembly training, hardware layout, and design evaluation of payloads. The Internet could be enhanced by such methods, adding another level of reality to what some already consider a form of ‘reality’ in itself, certainly part of their everyday routine and lifestyle. The Internet, already used for managing bank accounts, booking holidays and even socialising, could be represented as a three-dimensional immersive environment with hybrid elements, offering users or ‘participants’ a virtual shopping mall with a virtual bank and desk clerk, a virtual travel agent, and enhanced three-dimensional interactive avatars for socialising in web forums and, obviously, gaming situations.
References
ACM News Track: Virtual Visions. Communications of the ACM, 47(9), September 2004, pp9-10
http://cave.ncsa.uiuc.edu/about.html - accessed June 12th 2005
http://www.elpais.com.uy/muva2/ - accessed June 12th 2005
Hoffman, D.L. Novak, T. P. Venkatesh, A. Has the Internet become Indispensable? Communications of the ACM, 47(7) July 2004, pp37-42
www.holomatix.com - accessed June 12th 2005
http://interactivity.ucsd.edu/projects/augMedia/perceptionVE.html - accessed June 12th 2005
Imielinska, C. Molholt, P. Incorporating 3D Virtual Anatomy into the Medical Curriculum. Communications of the ACM, 48(2), February 2005, pp49-54
www.kevinwarwick.com/ - accessed June 12th 2005
Lok, B. C. Toward the Merging of Real and Virtual Spaces. Communications of the ACM, 47(8), August 2004, pp48-53
http://news.bbc.co.uk/1/hi/england/kent/4557881.stm - accessed June 12th 2005
www.readymade.fr/ - accessed June 12th 2005
Rosenbloom, A. Interactive Immersion in 3D Computer Graphics. Communications of the ACM, 47(8), August 2004, pp28-31
www.se.rit.edu - accessed June 12th 2005
www.starosta.com/3dshowcase/istereo.html - accessed June 12th 2005
www.tate.org.uk/collections/insight - accessed June 12th 2005
UKRI Newsletter, Tom Hammons, October 2003
www.webopedia.com/TERM/R/rendering.html - accessed June 12th 2005
www.webopedia.com/TERM/R/ray_tracing.html - accessed June 12th 2005