Virtual Reality


 What is virtual reality?

Virtual reality (VR) is a technology which allows a user to interact with a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. It is an artificial environment that is created with software and presented to the user in such a way that the user suspends disbelief and accepts it as a real environment. On a computer, virtual reality is primarily experienced through two of the five senses: sight and sound.

Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones.

Virtual reality can be divided into:

  • The simulation of a real environment for training and education.
  • The development of an imagined environment for a game or interactive story.


The concept of virtual reality has been around for decades, even though the public really only became aware of it in the early 1990s.

Mid-1950s: Cinematographer Morton Heilig & Device: Sensorama

Envisioned a theatre experience that would stimulate all his audiences’ senses, drawing them into the stories more effectively. He built a console in 1960 called the Sensorama that included a stereoscopic display, fans, odor emitters, stereo speakers and a moving chair. He also invented a head-mounted television display designed to let a user watch television in 3-D. Users were passive audiences for the films, but many of Heilig’s concepts would find their way into the VR field.

In 1961: Philco Corporation engineers & Device: Headsight

Developed the first HMD in 1961, called the Headsight. The helmet included a video screen and tracking system, which the engineers linked to a closed circuit camera system. They designed the HMD for use in dangerous situations — a user could observe a real environment remotely, adjusting the camera angle by turning his head.

Bell Laboratories used a similar HMD for helicopter pilots. They linked HMDs to infrared cameras attached to the bottom of helicopters, which allowed pilots to have a clear field of view while flying in the dark.

In 1965: Computer Scientist Ivan Sutherland

Envisioned what he called the “Ultimate Display.” Using this display, a person could look into a virtual world that would appear as real as the physical world the user lived in. This vision guided almost all the developments within the field of virtual reality. Sutherland’s concept included:

  • A virtual world that appears real to any observer, seen through an HMD.
  • A computer that maintains the world model in real time.
  • The ability for users to manipulate virtual objects in a realistic, intuitive way.

For years, VR technology remained out of the public eye. Almost all development focused on vehicle simulations until the 1980s.

In 1984: Michael McGreevy & Device: Human-Computer Interface (HCI)

Began to experiment with VR technology as a way to advance human-computer interface (HCI) designs. HCI still plays a big role in VR research, and moreover it led to the media picking up on the idea of VR a few years later.

In 1987: Jaron Lanier coined the term “virtual reality.”


Other sensory output from the VE system should adjust in real time as a user explores the environment.  Sensory stimulation must be consistent if a user is to feel immersed within a VE. If the VE shows a perfectly still scene, you wouldn’t expect to feel gale-force winds. Likewise, if the VE puts you in the middle of a hurricane, you wouldn’t expect to feel a gentle breeze or detect the scent of roses.

Lag time between when a user acts and when the virtual environment reflects that action is called latency. Latency usually refers to the delay between the time a user turns his head or moves his eyes and the change in the point of view, though the term can also be used for a lag in other sensory outputs. Studies with flight simulators show that humans can detect a latency of more than 50 milliseconds. When a user detects latency, it causes him to become aware of being in an artificial environment and destroys the sense of immersion.
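As a rough illustration of the threshold described above, the sketch below flags a tracker-to-display lag as noticeable once it exceeds the ~50 ms figure from the flight-simulator studies. The function names and the seconds-based timestamp convention are illustrative assumptions, not part of any real VR API.

```python
# Minimal sketch: deciding whether head-tracking latency is long enough
# for a typical user to detect. Timestamps are assumed to be in seconds
# (e.g., from time.monotonic()).

LATENCY_THRESHOLD_MS = 50  # detectable lag reported in flight-simulator studies

def latency_ms(action_time: float, display_time: float) -> float:
    """Lag between a head movement and the corresponding frame, in milliseconds."""
    return (display_time - action_time) * 1000.0

def is_noticeable(action_time: float, display_time: float) -> bool:
    """True if the lag is long enough for a typical user to detect."""
    return latency_ms(action_time, display_time) > LATENCY_THRESHOLD_MS

print(is_noticeable(0.000, 0.030))  # False: 30 ms lag, below the threshold
print(is_noticeable(0.000, 0.080))  # True: 80 ms lag, immersion breaks
```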

An immersive experience suffers if a user becomes aware of the real world around him. Truly immersive experiences make the user forget his real surroundings, effectively causing the computer to become a nonentity. In order to reach the goal of true immersion, developers have to come up with input methods that are more natural for users. As long as a user is aware of the interaction device, he is not truly immersed.


  • Immersive virtual reality
  • Non-immersive virtual reality
  • Semi-immersive virtual reality


In a virtual reality environment, a user experiences immersion, or the feeling of being inside and a part of that world. He is also able to interact with his environment in meaningful ways. The combination of a sense of immersion and interactivity is called telepresence.

Computer scientist Jonathan Steuer defined it as “the extent to which one feels present in the mediated environment, rather than in the immediate physical environment.” In other words, an effective VR experience causes you to become unaware of your real surroundings and focus on your existence inside the virtual environment.

Jonathan Steuer proposed two main components of immersion:

  • Depth of information
  • Breadth of information.

 Depth of information refers to the amount and quality of data in the signals a user receives when interacting in a virtual environment. For the user, this could refer to a display’s resolution, the complexity of the environment’s graphics, and the sophistication of the system’s audio output.

Breadth of information refers to the “number of sensory dimensions simultaneously presented.” A virtual environment experience has a wide breadth of information if it stimulates all your senses. Most virtual environment experiences prioritize visual and audio components over other sensory-stimulating factors, but a growing number of scientists and engineers are looking into ways to incorporate a user’s sense of touch. Systems that give a user force feedback and touch interaction are called haptic systems.


Non-immersive systems are, as the name suggests, the least immersive implementation of VR techniques. Using a desktop system, the virtual environment is viewed through a portal or window on a standard high-resolution monitor. Interaction with the virtual environment can occur by conventional means such as keyboards, mice and trackballs, or may be enhanced by using 3D interaction devices.


  • A large screen monitor
  • A large screen projector system
  • Multiple television projection systems

Similar to IMAX theatres, these systems use a wide field of view and so increase the feeling of immersion or presence experienced by the user. Semi-immersive systems therefore provide a greater sense of presence than non-immersive systems, and also a greater appreciation of scale. In addition, they can provide images of far greater resolution than HMDs, and this implementation offers the ability to share the virtual experience. This may be of considerable benefit in educational applications, as it allows simultaneous experience of the VE, which is not available with head-mounted immersive systems.


                    Immersion within a virtual environment is one thing, but for a user to feel truly involved there must also be an element of interaction. Early applications using the technology common in VE systems today allowed the user to have a relatively passive experience. Users could watch a pre-recorded film while wearing a head-mounted display (HMD). They would sit in a motion chair and watch the film as the system subjected them to various stimuli, such as blowing air on them to simulate wind. While users felt a sense of immersion, interactivity was limited to shifting their point of view by looking around. Their path was pre-determined and unalterable.

Interactivity depends on many factors. Steuer suggests that three of these factors are speed, range and mapping. Steuer defines speed as the rate that a user’s actions are incorporated into the computer model and reflected in a way the user can identify by means of senses. Range refers to how many possible outcomes could result from any particular user action. Mapping is the system’s ability to produce natural results in response to a user’s actions.

Navigation within a virtual environment is one kind of interactivity. If a user can direct his own movement within the environment, it can be called an interactive experience. Most virtual environments include other forms of interaction, since users can easily become bored after just a few minutes of exploration.

Computer Scientist Mary Whitton points out that poorly designed interaction can drastically reduce the sense of immersion, while finding ways to engage users can increase it. When a virtual environment is interesting and engaging, users are more willing to suspend disbelief and become immersed.

True interactivity also includes being able to modify the environment. A good virtual environment will respond to the user’s actions in a way that makes sense, even if it only makes sense within the realm of the virtual environment. If a virtual environment changes in outlandish and unpredictable ways, it risks disrupting the user’s sense of telepresence.



Data gloves offer a simple means of gesturing commands to the computer. Rather than punching in commands on a keyboard, which can be tricky if you’re wearing a head-mounted display or are operating the BOOM, you program the computer to change modes in response to the gestures you make with the data gloves.

Pointing upwards may mean zoom in; pointing down, zoom out. A shake of your fist may signal the computer to end the program. Some people program the computer to mimic their hand movements in the simulation; for instance, to see their hands while conducting a virtual symphony.
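The gesture-to-command scheme described above can be sketched as a simple lookup table. The gesture names and commands here are illustrative assumptions; a real system would first have to classify gestures from the glove’s finger-flex sensor readings.

```python
# Hypothetical sketch of a data-glove gesture-to-command table.

GESTURE_COMMANDS = {
    "point_up": "zoom_in",      # pointing upwards zooms in
    "point_down": "zoom_out",   # pointing down zooms out
    "fist_shake": "end_program" # a shake of the fist ends the program
}

def handle_gesture(gesture: str) -> str:
    """Translate a recognized glove gesture into a simulation command."""
    return GESTURE_COMMANDS.get(gesture, "ignore")

print(handle_gesture("point_up"))   # zoom_in
print(handle_gesture("open_palm"))  # ignore (unmapped gesture)
```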


Wands, the simplest of the interface devices, come in all shapes and variations. Most incorporate on-off buttons to control variables in a simulation or in the display of data. Others have knobs, dials, or joysticks. Their design and manner of response are tailored to the application.

Most wands operate with six degrees of freedom; that is, by pointing a wand at an object, you can change its position along three axes (forward or backward, up or down, left or right) as well as its orientation (yaw, pitch and roll).


Stair steppers are an example of the limitless manifestations of interface devices. As part of a simulated battlefield terrain, engineers from an army research lab outfitted a stair stepper with sensing devices to detect the speed, direction, and intensity of a soldier’s movements in response to the battlefield scenes projected onto a head-mounted display. The stair stepper provided feedback to the soldier by making the stairs easier or more difficult to climb.



Looking like oversized motorcycle helmets, head-mounted displays are actually portable viewing screens that add depth to otherwise flat images. If you look inside the helmet you will see two lenses through which you look at a viewing screen. As a simulation begins, the computer projects two slightly different images on the screen: one presenting the object as it would be seen through your right eye, the other, through your left. These two stereo images are then fused by your brain into one 3D image.
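The stereo pair described above comes from rendering the scene from two viewpoints, one per eye, offset to either side of the tracked head position by half the interpupillary distance (IPD). The sketch below is a simplified illustration; the ~63 mm IPD value and the head-facing-along-z convention are assumptions.

```python
# Hypothetical sketch of how an HMD derives its two stereo viewpoints:
# one camera per eye, separated horizontally by the interpupillary distance.

IPD_M = 0.063  # average adult interpupillary distance, roughly 63 mm

def eye_positions(head_x: float, head_y: float, head_z: float,
                  ipd: float = IPD_M):
    """Return (left_eye, right_eye) positions for a head at the given point."""
    half = ipd / 2.0
    left = (head_x - half, head_y, head_z)   # left eye: half the IPD to the left
    right = (head_x + half, head_y, head_z)  # right eye: half the IPD to the right
    return left, right

left, right = eye_positions(0.0, 1.7, 0.0)
print(left)   # left eye ~31.5 mm to the left of head centre
print(right)  # right eye ~31.5 mm to the right
```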

To track your movements, a device on top of the helmet signals your head movements relative to a stationary tracking device. As you move your head forwards, backwards, or sideways, or look in a different direction, a computer continually updates the simulation to reflect your new perspective.

Because head-mounted displays block out the surrounding environment, they are favored by VR operators who want the wearers to feel absorbed in the virtual environment, such as in flight simulators. And as you might expect, these displays also are popular with the entertainment industry.

Data gloves and wands are the most common interface devices used with head-mounted displays.


The Binocular Omni Orientation Monitor, or BOOM, is similar to a head-mount except that there’s no fussing with a helmet. The BOOM’s viewing box is suspended from a two-part, rotating arm. Simply place your forehead against the BOOM’s two eyeglasses and you’re in the virtual world. To change your perspective on an image, grab the handles on the side of the viewing box and move around the image in the same way you would if it were real: Bend down to look at it from below; walk around it to see it from behind. Control buttons on the BOOM handles usually serve as the interface although you can hook up data gloves or other interface devices.


One of the newest, most “immersive” virtual environments is the CAVE (CAVE Automatic Virtual Environment).

It provides the illusion of immersion by projecting stereo images on the walls and floor of a room-sized cube. Several persons wearing lightweight stereo glasses can enter and walk freely inside the CAVE.


A variety of input devices like data gloves, joysticks, and hand-held wands allow the user to navigate through a virtual environment and to interact with virtual objects. Directional sound, tactile and force feedback devices, voice recognition and other technologies are being employed to enrich the immersive experience and to create more “sensualized” interfaces.


Three networked users at different locations (anywhere in the world) meet in the same virtual world by using a BOOM device, a CAVE system, and a head-mounted display, respectively. All users see the same virtual environment from their respective points of view. Each user is presented as a virtual human (avatar) to the other participants. The users can see each other, communicate with each other, and interact with the virtual world as a team.


As virtual environments are supposed to simulate the real world, to construct them we must know how to “fool the user’s senses.” This is not a trivial problem, and a sufficiently good solution has not yet been found: on the one hand we must give the user a convincing feeling of being immersed, and on the other hand the solution must be feasible. Roughly, the human senses contribute to perception in the following proportions:

  • Sight: 70%
  • Hearing: 20%
  • Smell: 5%
  • Touch: 4%
  • Taste: 1%

Human vision provides most of the information passed to our brain and captures most of our attention. Therefore the stimulation of the visual system plays a principal role in “fooling the senses” and has become the focus of research.


 Tracking devices are intrinsic components in any VR system. These devices communicate with the system’s processing unit, telling it the orientation of a user’s point of view. In systems that allow a user to move around within a physical space, trackers detect where the user is, the direction he is moving and his speed. There are several different kinds of tracking systems used in VR systems, but all of them have a few things in common. They can detect six degrees of freedom (6-DOF) — these are the object’s position within the x, y and z coordinates of a space and the object’s orientation. Orientation includes an object’s yaw, pitch and roll.

From a user’s perspective, this means that when you wear an HMD, the view shifts as you look up, down, left and right. It also changes if you tilt your head at an angle or move your head forward or backward without changing the angle of your gaze. The trackers on the HMD tell the CPU where you are looking, and the CPU sends the right images to your HMD’s screens
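The six degrees of freedom a tracker reports can be sketched as a small data structure: three positional values and three orientation angles. This is a hedged illustration; the field names, units, and dictionary-based update format are assumptions, not any particular tracker’s protocol.

```python
# Hypothetical sketch of a 6-DOF head pose as reported by a tracking system.

from dataclasses import dataclass

@dataclass
class Pose6DOF:
    x: float = 0.0      # left/right position (metres)
    y: float = 0.0      # up/down position
    z: float = 0.0      # forward/backward position
    yaw: float = 0.0    # rotation about the vertical axis (degrees)
    pitch: float = 0.0  # nodding up or down
    roll: float = 0.0   # tilting the head toward a shoulder

def apply_tracker_update(pose: Pose6DOF, deltas: dict) -> Pose6DOF:
    """Add a tracker's reported changes to the current head pose."""
    for axis, delta in deltas.items():
        setattr(pose, axis, getattr(pose, axis) + delta)
    return pose

head = Pose6DOF()
apply_tracker_update(head, {"yaw": 15.0, "z": -0.1})  # look right, lean forward
print(head.yaw, head.z)  # 15.0 -0.1
```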

Every tracking system has a device that generates a signal, a sensor that detects the signal and a control unit that processes the signal and sends information to the CPU. Some systems require you to attach the sensor component to the user (or the user’s equipment). In that kind of system, you place the signal emitters at fixed points in the environment. Some systems are the other way around, with the user wearing the emitters while surrounded by sensors attached to the environment.

The signals sent from emitters to sensors can take many forms, including electromagnetic signals, acoustic signals, optical signals and mechanical signals. Each technology has its own set of advantages and disadvantages.


Magnetic trackers are the most commonly used tracking devices in immersive applications. They measure magnetic fields generated by running an electric current sequentially through three coiled wires arranged perpendicular to one another. Each small coil becomes an electromagnet, and the system’s sensors measure how its magnetic field affects the other coils. This measurement tells the system the direction and orientation of the emitter. A good electromagnetic tracking system is very responsive, with low levels of latency.

One disadvantage of this system is that anything that can generate a magnetic field can interfere in the signals sent to the sensors.


Acoustic tracking systems emit and sense ultrasonic sound waves to determine the position and orientation of a target. Most measure the time it takes for the ultrasonic sound to reach a sensor. Usually the sensors are stationary in the environment and the user wears the ultrasonic emitters. The system calculates the position and orientation of the target based on the time it took for the sound to reach the sensors.

Disadvantages: Sound travels relatively slowly, so the rate of updates on a target’s position is similarly slow. The environment can also adversely affect the system’s efficiency, because the speed of sound through air changes with the temperature and humidity of the environment.
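The time-of-flight principle and its temperature sensitivity can be made concrete with the standard approximation for the speed of sound in dry air, c ≈ 331.3 + 0.606·T m/s. The sketch below shows how the same travel time yields different distances at different temperatures; the function names are illustrative.

```python
# Hypothetical sketch of acoustic (time-of-flight) ranging. Distance is
# travel time multiplied by the speed of sound, which shifts with air
# temperature -- one reason these systems are environment-sensitive.

def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in dry air (m/s) at a given temperature."""
    return 331.3 + 0.606 * temp_c

def emitter_distance(travel_time_s: float, temp_c: float = 20.0) -> float:
    """Distance from emitter to sensor, given the ultrasonic travel time."""
    return speed_of_sound(temp_c) * travel_time_s

# The same 4 ms travel time yields different distances at 0 C vs 30 C:
print(emitter_distance(0.004, temp_c=0.0))   # roughly 1.33 m
print(emitter_distance(0.004, temp_c=30.0))  # roughly 1.40 m
```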


Optical tracking systems use light to measure a target’s position and orientation. The signal emitter in an optical device typically consists of a set of infrared LEDs. The sensors are cameras that can sense the emitted infrared light. The LEDs light up in sequential pulses. The cameras record the pulsed signals and send information to the system’s processing unit.

Disadvantages: Other sources of infrared radiation in the environment, such as direct sunlight, can interfere with the sensors and make the system less effective.


Mechanical tracking systems rely on a physical connection between the target and a fixed reference point. A common example in the VR field is the BOOM display: an HMD mounted on the end of a mechanical arm that has two points of articulation. The system detects the position and orientation through the arm. The update rate is very high with mechanical tracking systems, but the disadvantage is that they limit a user’s range of motion.
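One reason a mechanical tracker like the BOOM updates so quickly is that its pose follows directly from joint angles read off the arm: no signal emission or sensing is needed, just forward kinematics. The sketch below simplifies the two-joint arm to a 2-D plane; the segment lengths and function names are illustrative assumptions.

```python
# Hypothetical sketch: 2-D forward kinematics of a two-joint mechanical arm,
# giving the position of the viewing box at its end from the joint angles.

import math

def boom_display_position(len1: float, len2: float,
                          angle1_deg: float, angle2_deg: float):
    """Position of the display at the end of a two-segment arm.

    angle1 is measured at the base; angle2 is relative to the first segment.
    """
    a1 = math.radians(angle1_deg)
    a2 = math.radians(angle1_deg + angle2_deg)
    x = len1 * math.cos(a1) + len2 * math.cos(a2)
    y = len1 * math.sin(a1) + len2 * math.sin(a2)
    return x, y

# Arm segments of 0.8 m and 0.6 m, both joints at 0 degrees: fully extended.
x, y = boom_display_position(0.8, 0.6, 0.0, 0.0)
print(round(x, 3), round(y, 3))  # 1.4 0.0
```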


As the technologies of virtual reality evolve, the applications of VR become practically unlimited. It is assumed that VR will reshape the interface between people and information technology by offering new ways for the communication and visualization of information.

Two approaches to current VR development:

  • Modeling the real world
  • Abstract visualization



An area in which virtual reality has tremendous potential is architectural design. Architectural models are already being created that allow designers and clients to examine homes and office buildings, inside and out, before they’re built. With virtual reality, designers can interactively test a building before construction begins.


The military have long been supporters of VR technology and development. Training programs can include everything from vehicle simulations to squad combat. On the whole, VR systems are much safer and, in the long run, less expensive than alternative training methods. Soldiers who have gone through extensive VR training have proven to be as effective as those who trained under traditional conditions.


For years now, virtual environments have been used to treat anxiety problems with exposure therapy. Psychologists treat phobias and post-traumatic stress disorder by exposing the patient to the thing that causes the anxiety and letting the anxiety dissipate on its own. But this proves difficult if your stressor is a battlefield in Iraq. Military psychologists use simulated Iraq war situations to treat soldiers. Other therapeutic VR uses include treating a fear of flying, fear of elevators, and even a “virtual nicotine craving” simulator for smoking addiction.


Virtual reality environments have also been used for training simulators. The earliest examples were flight simulators (“Microsoft Flight Simulator”), but VR training has expanded beyond just that. There are many modern military examples, including Iraqi cultural situations and battlefield simulators for soldiers.

Flight simulators are a good example of a VE system that is effective within strict limits. In a good flight simulator, a user can take the same flight path under a wide range of conditions. Users can feel what it’s like to fly through storms, thick fog or calm winds. Realistic flight simulators are effective and safe training tools, and though a sophisticated simulator can cost tens of thousands of dollars, they’re cheaper than an actual aircraft (and it’s tough to damage one in an accident). The limitation of flight simulators from a VR perspective is that they are designed for one particular task. You can’t step out of a flight simulator and remain within the virtual environment, nor can you do anything other than pilot an aircraft while inside one.


Virtual reality (VR) can be described as a cutting-edge technology that allows students to step through the computer or television screen into a three dimensional, computer-simulated world to learn.


One result of virtual-reality research is the existence of entirely separate virtual worlds, inhabited entirely by the avatars of real-world users. These worlds are sometimes referred to as massively multiplayer online games; World of Warcraft is the largest virtual gaming world in use now, with 11.5 million subscribers.


Probably the most successful cousin of virtual reality on the market today is the Nintendo Wii. The Wii owes its motion capture and intuitive interaction concepts to the virtual reality technologies of the past. The controller is basically a simplified version of the “virtual reality glove.” Both the Wiimote and the Wii Fit offer users another way of interacting with their virtual environment without having to wear any bulky equipment.


Modern medicine has also found many uses for virtual reality. Doctors can interact with virtual systems to practice procedures or to perform tiny surgical procedures on a larger scale, and medical staff can use virtual environments to train in everything from surgical procedures to diagnosing a patient. Surgeons have started using virtual “twins” of their patients to rehearse an operation before performing the actual procedure, and have used VR technology not only to train and educate, but also to perform surgery remotely using robotic devices.

Researchers are using virtual reality technology to create 3-D ultrasound images to help doctors diagnose and treat congenital heart defects in children.


 The other most commonly found approach to VR application is in those areas where large quantities of abstract data need to be manipulated, examined or accessed. Such visualizations range from common datasets such as maps, to micro and macro structures such as molecular architecture or social networks. By combining VR with Geographical Information Systems (GIS), geographical information can be explored in three dimensions or the information contained within a computer database can be visualized and navigated.

Almost any situation that requires interaction with information (even mathematical algorithms) can benefit from VR visualization. Users are able to visualize and interact with information through multi-dimensional graphical representations (combined with text clues). Such representations increase users’ ability to analyze the underlying data by negating the need for them to construct their own mental image of the data.


As the number of applications of virtual reality (VR) has grown, there have also been changes in the different formats of VR-type software. Each format has differing approaches to, and varying degrees of, three-dimensionality, immersion and interaction.


Some programmers envision the Internet developing into a three-dimensional virtual space, where you navigate through virtual landscapes to access information and entertainment. Web sites could take form as a three-dimensional location, allowing users to explore in a much more literal way than before. Programmers have developed several different computer languages and Web browsers to achieve this vision. Some of these include:

  • Virtual Reality Modeling Language (VRML) – the earliest three-dimensional modeling language for the Web.
  • 3DML – a three-dimensional modeling language where a user can visit a spot (or Web site) through most Internet browsers after installing a plug-in.
  • X3D – the language that replaced VRML as the standard for creating virtual environments on the Internet. X3D superseded VRML97; since VRML97 is a subset of the X3D standard, VRML files can still be processed by newer X3D browsers.
  • Collaborative Design Activity (COLLADA) – a format used to allow file interchanges within three-dimensional programs.
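To make the X3D idea concrete, the sketch below generates a minimal X3D scene containing a single box, using only Python’s standard XML library. The element names (X3D, Scene, Shape, Box) are standard X3D nodes; the version attribute and helper-function name are illustrative choices, and a real scene would carry profile and namespace attributes as well.

```python
# Hypothetical sketch: building a tiny X3D scene with the standard library.

import xml.etree.ElementTree as ET

def minimal_x3d_scene() -> str:
    """Build an X3D document containing a single 2x2x2 box."""
    root = ET.Element("X3D", version="3.3")
    scene = ET.SubElement(root, "Scene")
    shape = ET.SubElement(scene, "Shape")
    ET.SubElement(shape, "Box", size="2 2 2")  # a box primitive, 2 units per side
    return ET.tostring(root, encoding="unicode")

print(minimal_x3d_scene())
```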


  • Bottleneck of transmission bandwidth
  • 3-D visualization technology closely integrated with the data warehouse
  • Preserve the integrity of the database in a shared user environment


  • Virtual Theme Park
  • Virtual Shopping Mall
  • Real-time Conferencing
  • Flight Simulation
  • Gaming Experience


Three-dimensional (3-D), multi-user, online environments constitute a revolution of interactivity by creating a compelling online experience.

VE offers e-shoppers the ability to study a product carefully.

It gives e-shoppers confidence that what they see is actually what they will get, and provides a better description of the product.


Tele-education, telemedicine, tele-banking and tele-work become possible. VR opens up new ways for people to interact with each other and with computers.

Application of VR and Telecommunication

  • Telemedicine
  • Tele-education
  • Tele-training
  • Tele-banking
  • Tele-work


Using VR to manage Broadband Telecommunication Networks

  • VR user interfaces for broadband networks
  • Allow network structure and information flow to be visualized
  • Operators can respond immediately through VR, reducing errors
  • Operators act as though in the real world, using data gloves


Most of today’s VR applications do not conform closely to reality and are of poor quality. They are still very useful, but must be improved considerably to allow more comfortable and intuitive interaction with virtual worlds.

The big challenges in the field of virtual reality are developing better tracking systems, finding more natural ways to allow users to interact within a virtual environment and decreasing the time it takes to build virtual spaces. While there are a few tracking system companies that have been around since the earliest days of virtual reality, most companies are small and don’t last very long.

Most attention has been paid to visual feedback, yet visual display technology still falls short: resolution is significantly below the eye’s resolving capability, luminance and color ranges do not cover the eye’s full perception range (brightness range and gamut, respectively), and the field of view is relatively narrow. All these disadvantages make virtual worlds appear “artificial” and unreal, which contributes severely to simulator sickness.

Without well-designed hardware, a user could have trouble with his sense of balance or inertia, with a decrease in the sense of telepresence, or he could experience cybersickness, with symptoms that can include disorientation and nausea. Not all users seem to be at risk for cybersickness — some people can explore a virtual environment for hours with no ill effects, while others may feel queasy after just a few minutes.

Some psychologists are concerned that immersion in virtual environments could psychologically affect a user.


Technology has transformed the world in which we live, changing how we spend our time, how we understand ourselves, and how we interact with others. Technological innovation results in social and economic change. Thus, VR will lead to the development of a Virtual World. And it is the Virtual World that promises to restructure human life and activity.