The Meditation Chamber was an immersive virtual environment originally created by long-time VR researchers Larry Hodges, Diane Gromala, Chris Shaw, and Fleming Seay.
In the Meditation Chamber, users sat in a comfortable, semi-reclining chair and experienced a VE that took them through three phases of a virtual experience. Prior to the first phase, users were fitted with a head-mounted display (HMD) and three biometric sensors that measured galvanic skin response (GSR), respiration, and heart rate. Once seated comfortably, users entered the first phase of the Meditation Chamber: as they were presented with a visual display of a sun (Figure 2), the system’s interactive “vocal coach” asked them to relax. The biofeedback device measured their GSR in real time, which directly affected the imagery: as the user began to relax and their GSR declined, the rate at which the sun moved would increase until it sank beneath the horizon, giving way to a peaceful night scene complete with chirping crickets. If the user was unable to relax, or their GSR increased, the sunset would slow down. The second part of this relaxation phase operated in the same way as the first, but depicted a moonrise instead of a sunset: as the user relaxed and lowered their GSR, the moon rose higher and higher into the sky. In effect, the user’s GSR measure determined the frame rate at which the sunset/moonrise animation played. In this phase, users reported that they became aware of their intentional efforts to relax because they understood that the visuals were responding to their continuously changing physiological state.
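The feedback loop described above, in which falling GSR speeds the sunset/moonrise animation and rising GSR slows it, can be sketched as a simple transfer function. The function name, the baseline-relative linear mapping, and the frame-rate bounds are all assumptions for illustration; the original system's actual mapping is not specified.

```python
def animation_rate(gsr_baseline, gsr_current, min_fps=0.0, max_fps=30.0):
    """Map relaxation (GSR falling below a baseline) to animation speed.

    A minimal sketch: the linear mapping and fps range are hypothetical,
    not the Meditation Chamber's actual transfer function.
    """
    # Relaxation fraction: 0 at (or above) baseline arousal,
    # approaching 1 as GSR drops well below baseline.
    relaxation = max(0.0, (gsr_baseline - gsr_current) / gsr_baseline)
    relaxation = min(relaxation, 1.0)
    # Lower GSR -> faster sunset/moonrise playback;
    # GSR at or above baseline -> animation effectively pauses.
    return min_fps + relaxation * (max_fps - min_fps)
```

With this mapping, a user at baseline sees a paused sun, while deepening relaxation smoothly accelerates the animation toward its maximum rate.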
In the second phase, users were taken through a set of muscle tension and relaxation exercises, again guided by the system’s vocal coach. 3D graphics of a human body were rendered and displayed from a first-person perspective (Figure 3); thus, the 3D body that users saw corresponded to their physical body. The user was coached to flex, hold, and release a set of eight different muscle groups, including the legs, arms, abdominals, and shoulders. Each muscle-group sequence was accompanied by gender-appropriate visuals depicting the described motion, usually from a first-person perspective. This phase was not interactive; instead, it asked the user to listen to the narrator’s instructions while mimicking the movements visually presented on the screen. The system’s creators and users noted that this was a strong and compelling illusion. The authors intend to expand upon this by including sensors on the users’ wrists, knees, and feet to strengthen the illusion of a one-to-one correspondence, or an embodied “felt sense.”
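Because this phase is non-interactive, it amounts to a fixed coaching script stepped through per muscle group. The sketch below assumes hypothetical prompt wording and durations; the source only states that the coach guided flex, hold, and release for eight groups, of which four are named.

```python
import time

# Only the four groups named in the text; the system used eight in total.
MUSCLE_GROUPS = ["legs", "arms", "abdominals", "shoulders"]

def coach_sequence(groups, hold_s=5, release_s=10, say=print, wait=time.sleep):
    """Non-interactive flex/hold/release script (illustrative sketch).

    Prompt phrasing and timing are assumptions; `say` and `wait` are
    injectable so the narration and pacing can be swapped out or tested.
    """
    for group in groups:
        say(f"Flex and hold your {group}.")
        wait(hold_s)
        say(f"Now release your {group} and relax.")
        wait(release_s)
```

Injecting `say` and `wait` keeps the script decoupled from any particular audio or timing backend.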
In the third phase, users were taken through a guided meditation and breathing exercise, interacting with soothing visual imagery and ambient sound. As users approached what is considered an acceptable biometric approximation of a meditative state, the volume of the sound decreased while the interactive visuals dissolved to black; often, users simply closed their eyes. After a prescribed period of meditation, the vocal coach gently suggested that users end their session.
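The fade-out described here can be sketched as a function from a composite arousal score to audio volume and visual brightness. The scoring, threshold, and linear fade are all assumptions for illustration; the source does not specify how GSR, respiration, and heart rate were combined or what counted as the meditative threshold.

```python
def fade_levels(arousal, threshold, full_scale):
    """Map a composite arousal score to (volume, brightness) in [0, 1].

    A minimal sketch, assuming a single scalar arousal score: as the
    score falls toward a hypothetical meditative threshold, both sound
    and visuals fade linearly toward silence and black.
    """
    if arousal <= threshold:
        return 0.0, 0.0  # fully faded: silence and a black display
    # Linear fade between the threshold and a full-arousal reference.
    level = min(1.0, (arousal - threshold) / (full_scale - threshold))
    return level, level  # (audio volume, visual brightness)
```

Tying both channels to one score keeps the sound and imagery fading in lockstep, matching the described experience of the scene dissolving away as the user settles.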