next up previous contents index
Next: 10.6 Applications in Geography, Up: 10. Interactive and Dynamic Previous: 10.4 Graphical Software

10.5 Interactive 3D Graphics

A natural extension of 2D interactive and dynamic graphics is the use of anaglyphs and stereoscopic displays on a computer screen and eventually the use of VR environments to obtain a 3D representation of statistical data and linked objects from geography or medicine.


10.5.1 Anaglyphs

A German teacher, Wilhelm Rollmann, first described the effect of stereoscopic graphics drawn in red and green and viewed with the naked eye ([131]), i.e., what is now called free-viewing of stereoscopic images. Later the same year, [132] described the effect of looking at such colored pictures through filter glasses of the corresponding complementary colors. As a reminder, red and green are complementary colors in the conventional color model, whereas red and cyan are complementary colors in the RGB color model used for most computer monitors. The work by Wilhelm Rollmann was later judged by [178] and [133] to mark the birth of anaglyphs. The mathematics underlying anaglyphs and stereoscopic displays can be found in [88] and [184], for example.
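As an illustration of the color-channel idea (a minimal sketch, not code from any of the works cited), a red-cyan anaglyph for an RGB monitor can be composed by routing the left-eye view into the red channel and the right-eye view into the green and blue (cyan) channels; the stereo pair itself can be produced by a simple perspective projection with two horizontally offset eye positions. The function names, the eye separation, and the projection conventions below are illustrative assumptions:

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine two grayscale views (H x W arrays with values in [0, 1])
    into an RGB red-cyan anaglyph: the left-eye view feeds the red
    channel, the right-eye view the green and blue (cyan) channels."""
    anaglyph = np.zeros(left.shape + (3,))
    anaglyph[..., 0] = left    # red channel:   left eye
    anaglyph[..., 1] = right   # green channel: right eye
    anaglyph[..., 2] = right   # blue channel:  right eye
    return anaglyph

def stereo_pair(points, eye_sep=0.06, dist=2.0):
    """Project 3D points (n x 3 array) to two 2D views, one per eye,
    using a simple perspective projection: the eyes are horizontally
    offset by eye_sep and the projection plane sits at distance dist.
    Sign conventions and default values are illustrative assumptions."""
    views = []
    for dx in (-eye_sep / 2, eye_sep / 2):
        x, y, z = points[:, 0] - dx, points[:, 1], points[:, 2]
        s = dist / (dist + z)  # perspective scaling with depth
        views.append(np.column_stack((x * s, y * s)))
    return views  # [left view, right view]
```

Viewed through red-cyan filter glasses, each eye then receives (approximately) only its own view, which the brain fuses into a three-dimensional impression.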

Stereoscopic displays and anaglyphs have been used within statistics by Daniel B. Carr, Richard J. Littlefield, and Wesley L. Nicholson (Carr et al., [32], [36]; [31], [34]). In particular, anaglyphs can be considered an important means of representing three-dimensional pictures on flat surfaces. They have been used in a variety of sciences but have found little use in statistics. One of the first implementations of red-green anaglyphs was the ''real-time rotation of three-dimensional scatterplots'' in the Mason Hypergraphics software package, described in [17], page 125. Independently of the work on anaglyphs conducted in the U.S., interactive statistical anaglyph programs were also developed by Franz Hering, Jürgen Symanzik, and Stephan von der Weydt at the Fachbereich Statistik, University of Dortmund ([87], [86], [146], [147], [148]; [85]).

[185] is one of the rare sources in statistics where anaglyphs are used, namely in the papers of [8], [36], and [81]. Moreover, [185] appears to be the first statistical reference in which colored (red-green) anaglyphs were published in print.

Figure 10.5: Screenshots of the ''Places'' data in VRGobi, previously published in [153]. A map view (left) and a three-dimensional point cloud displaying HousingCost, Climate, and Education (right) are shown. The control panel, glyph types, and the boundary box that delimits the plot area are visible (top row). Cities with nice Climate and high HousingCost have been brushed and happen to fall into California (middle row). Among the brushed points is one city (San Francisco) with an outstanding value for Education (bottom row). When running VRGobi in the C2 (instead of producing screenshots from one of the control monitors), the rendered arm would be replaced by a human user, possibly wearing a data glove.
\includegraphics[width=11.7cm]{text/2-10/fig5.eps}


10.5.2 Virtual Reality

Many different definitions of the term VR can be found throughout the literature. [62] summarizes several possible definitions of VR, including the following working definition for this chapter: ''Virtual reality refers to immersive, interactive, multi-sensory, viewer-centered, three-dimensional computer generated environments and the combination of technologies required to build these environments.'' A brief chronology of events that influenced the development of VR can be found in [62]. A more detailed overview of VR can be found in [128] or [177], for example.

Carolina Cruz-Neira and her colleagues developed an ambitious visualization environment at the Electronic Visualization Lab (EVL) of the University of Illinois at Chicago, known simply as the CAVE (Cruz-Neira et al., [66], [64], [65]; [63], [135]). The abbreviation CAVE stands for CAVE Audio Visual Experience Automatic Virtual Environment. Carolina Cruz-Neira moved to ISU in 1995, where she was involved in the development of a second, larger CAVE-like environment known as the C2. The CAVE, C2, and several of their successors belong to the class of immersive projection technology (IPT) systems, in which the user is visually immersed in the virtual environment.

The use of ISU's C2 for statistical visualization is based on the framework of three-dimensional projections of $p$-dimensional data, using as a basis the methods developed and available in XGobi. The implementation of some of the basic XGobi features in the C2 resulted in VRGobi (see Fig. 10.5). The main difference between XGobi and VRGobi is that the XGobi user interface is rather like a desktop with pages of paper, whereas VRGobi is more like having the whole room at the user's disposal for the data analysis.

VRGobi and the statistical visualization in the C2 have been extensively explored and documented in the literature (Symanzik et al., [153], [154]; Cook et al., [56], [55]; Nelson et al., [124], [125]; [51]). The main developers of VRGobi, over time, were Dianne Cook and Carolina Cruz-Neira, with major contributions by Brad Kohlmeyer, Uli Lechner, Nicholas Lewin, Laura Nelson, and Jürgen Symanzik. Additional information on VRGobi can be found at http://www.las.iastate.edu/news/Cook0219.html.

The initial implementation of VRGobi contains a three-dimensional grand tour. Taking arbitrary three-dimensional projections can expose features of the data not visible in one-dimensional or two-dimensional marginal plots.
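As a sketch of the underlying idea (not VRGobi's actual implementation), one target view of a three-dimensional grand tour can be obtained by projecting the $n \times p$ data matrix onto a random orthonormal $p \times 3$ frame; a full grand tour would additionally interpolate smoothly between successive frames. The function names below are illustrative assumptions:

```python
import numpy as np

def random_3d_frame(p, rng):
    """Draw a random p x 3 orthonormal frame (a random 3D projection
    plane) via QR decomposition of a Gaussian matrix; the sign fix
    makes the frame uniformly distributed."""
    A = rng.standard_normal((p, 3))
    Q, R = np.linalg.qr(A)          # reduced QR: Q is p x 3
    return Q * np.sign(np.diag(R))  # flip columns where diag(R) < 0

def grand_tour_projection(X, rng=None):
    """Project the n x p data matrix X onto one random 3D frame,
    i.e., compute one target view of a three-dimensional grand tour
    (a real tour would interpolate between such frames)."""
    rng = np.random.default_rng(rng)
    F = random_3d_frame(X.shape[1], rng)
    return X @ F  # n x 3 coordinates for the 3D display
```

Structure that is invisible in any one-dimensional or two-dimensional margin, such as a cluster separated only along an oblique direction, can become visible in some of these rotating three-dimensional views.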

One of the most difficult developments for VRGobi was the user interface (and not the statistical display components). While it is relatively simple to create popup menus that allow the user to select colors and symbols for brushing in a desktop environment, designing an appealing and operational three-dimensional interface for the C2 was a real challenge. VRGobi eventually comprised four main components: the viewing box, the three-dimensional control panel, the variable spheres (similar to the variable circles used in XGobi), and possibly a map view.

A three-dimensional map view, if used, allows the user to explore data in its spatial context within VRGobi, similar to the ArcView/XGobi link (Cook et al., [57], [58]) for the desktop.

IPT environments are remarkably different from the display devices commonly available for data analysis. They extend beyond the small gain of one more dimension of viewing space to being a completely defined ''real'' world space. In VRGobi, one is tempted to grab the objects, to climb a mountain in the map view, and to step aside when a point approaches one's face during the grand tour. The objects surround the viewer, and it is possible to walk through the data.

In Nelson ([124], [125]), experiments were conducted on structure detection, visualization, and ease of interaction. Because only $15$ human subjects participated in these experiments, statistically significant results could not be expected. Nevertheless, the experiments showed a clear trend: the test subjects performed considerably better on visualization tasks in the C2 than with XGobi on the workstation display. In contrast, interaction tasks such as brushing yielded better results on the workstation. However, subjects with even limited prior VR experience performed considerably better on the interaction tasks in the C2 than subjects with no prior VR experience, suggesting that some learning is needed to use the VR hardware effectively.

The high cost factor of the CAVE, C2, and similar IPT environments motivated the development of the PC-based MiniCAVE environment. The MiniCAVE is an immersive stereoscopic projection-based VR environment developed at GMU. It is oriented toward group interactions. As such, it is particularly suited to collaborative efforts in scientific visualization, data analysis, and VDM.

Initially, researchers began with a $333$ megahertz Pentium II machine running Windows NT. The SGI-based VR applications that make use of the OpenGL standard could be ported relatively easily to a PC environment. Using the Windows NT drivers, it was also possible to integrate the Crystal Eyes shutter glasses into the PC environment. The development of the MiniCAVE, now patented by GMU (Patent No. 6,448,965, ''Voice-Controlled Immersive Virtual Reality System''), has been documented in [192] and [191].

The one-wall MiniCAVE with speech recognition has been implemented on a dual $450$ megahertz Pentium III machine at GMU. In addition, a polarized light LCD projector with both front and rear projection is used. Versions of ExplorN and CrystalVision have been ported to the MiniCAVE environment.

In addition to the work on VR-based data visualization conducted at ISU and GMU, independent work has also been conducted elsewhere, e.g., at Georgia Tech and the Delft Technical University, The Netherlands, resulting in the Virtual Data Visualizer ([176]), and at the University of South Carolina, using the Virtual Reality Modeling Language (VRML) for VR applications on the World Wide Web ([134]). [16] describe 3DVDM, a 3D VDM system aimed at the visual exploration of large databases. More details are available at http://www.cs.auc.dk/3DVDM.

[51] lists three fields, ''environmental studies, especially data having a spatial component; shape statistics; and manufacturing quality control'', that would benefit most from VR and other IPT environments. Certainly, recent experimental desktop links of VR and visualization software with spatial statistical applications, such as the links between ViRGIS and $\mathrm{RA}_3\mathrm{DIO}$ with XGobi ([159], [136]), would benefit considerably from being conducted in an IPT environment. In addition to the fields in [51], we think that medical, genetic, and biological statistical data would also benefit considerably from being explored in an IPT environment.

