HoloMap
This project is already completed.

Exposé
HoloMap: Investigating Collaborative Navigation Techniques for a Virtual Museum using a Large-Scale Tracking System.

Motivation
Museums offer a rich and educational environment where people can immerse themselves in and explore unknown worlds, developing their own curiosity, creativity, critical thinking, and a connection to the world around them (Gross, 2014). Many museums have already implemented, or are currently working on, virtual reality (VR) applications for conventional exhibitions (Tsapatori et al., 2003) or lifelike Bronze Age experiences (British Museum, 2015). With a sound application of VR, museums can create new types of interactive exhibitions, extend their existing physical exhibitions, or combine interactive and non-interactive sections dynamically.
In cooperation with the Fraunhofer Institute for Integrated Circuits IIS in Nuremberg, we have a massive motion tracking system, the Holodeck 4.0, available for the development and testing of interaction techniques. The system offers us the opportunity to explore the use of VR in museums, along with its possibilities and limitations. The Holodeck 4.0 in Nuremberg covers an area of 1,400 square metres, and up to 100 3-degrees-of-freedom (DOF) markers can be tracked at the same time (Fraunhofer IIS, 2017). Combined with Samsung Gear devices, the system allows 6DOF head tracking. In order to keep the set-up simple, affordable, unobtrusive, and fast to equip, it is currently practicable to limit the motion tracking of users in the environment to one tracked marker per user for head tracking, built into or attached to a Head-Mounted Display (HMD).

Fig. 1: Holodeck 4.0 at Nuremberg
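The single-marker set-up described above can be thought of as a simple pose fusion: the tracking system supplies only the marker's 3DOF position, and the Gear device's inertial sensors supply only 3DOF rotation; combining the two yields the 6DOF head pose needed for rendering. The following minimal sketch illustrates the idea; all names and the quaternion convention are our own assumptions, not the Fraunhofer or Samsung API.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]
Quat = Tuple[float, float, float, float]  # (w, x, y, z), unit quaternion

@dataclass
class Pose6DOF:
    """A full 6-degrees-of-freedom head pose (hypothetical structure)."""
    position: Vec3      # metres, tracking-system world coordinates
    orientation: Quat   # rotation reported by the HMD's inertial sensors

def fuse_head_pose(marker_position: Vec3, imu_orientation: Quat) -> Pose6DOF:
    # The single tracked marker contributes only position (3DOF);
    # the Gear device's built-in IMU contributes only rotation (3DOF).
    # Combining the two yields the 6DOF pose used for rendering.
    return Pose6DOF(position=marker_position, orientation=imu_orientation)
```

In practice a real system would also have to align the IMU's reference frame with the tracking system's world frame and filter sensor noise; the sketch omits both for clarity.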
A VR application, irrespective of its objective, is composed of content and a framework handling input and output. To ensure that the VR application is usable and serves its purpose, the framework is responsible for providing interaction techniques which a user can understand and utilise. The content in the museum domain can be very heterogeneous and can change constantly. Moreover, most people organise their visit and find their way through a museum as a group: a study conducted by the Smithsonian Institution suggests that only 14% of museum visitors come on their own (Smithsonian Institution Office of Policy and Analysis, 2004). In addition, a group is sometimes led by a museum guide, who shows them exhibitions and provides information along the way. However, in a virtual environment, navigation and communication are more difficult than in reality. “Because output devices still cannot deliver information that fully matches the capabilities of the human perceptual system, they can have a negative impact on wayfinding” (Bowman, 2004). Furthermore, people cannot use their body movement to indicate paths or positions, which hinders the interpersonal communication of navigation issues or guiding information. Therefore, one important question is how visitors and guides are going to navigate and communicate in a purely virtual museum.
Given the possibilities and limitations of state-of-the-art large-scale VR systems and the requirements of the application domain, the interaction techniques in a virtual museum should support the following scenarios:
- Users should be able to organise their visit within the virtual environment
- Users should find their way through the environment or to particular positions
- Users should be able to show other users particular exhibits
- Users should be able to lead other users along a path or to a particular position
In this work we focus on the following question: What 3D interaction techniques can support collaborative navigation when visiting large, highly heterogeneous virtual environments (such as a virtual museum) using a large-scale tracking system with limited capabilities?
Related Work
In large-scale virtual environments, one important interaction technique used for navigation is the world-in-miniature (WIM) metaphor. “WIM can be considered a 3D generalisation of the traditional overview maps that are often used in 3D games” (Bowman, 2004).

Fig. 2: WIM from Pausch, R. et al., 1995
The original idea of the WIM is to serve as a “single unifying metaphor for such application independent interaction techniques as object selection, navigation, path planning, and visualisation” (Stoakley, Conway & Pausch, 1995). This technique has not yet been fully investigated in terms of collaborative navigation, nor with respect to several visualisation aspects influencing navigation tasks.
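At its core, a WIM maps every world-space position into a hand-held miniature through a uniform scale and an offset, and the inverse mapping is what turns a selection on the miniature back into a world position. The sketch below illustrates this; the function and parameter names are ours, and the uniform-scale, axis-aligned form is a simplifying assumption rather than the formulation from the cited papers.

```python
Vec3 = tuple  # (x, y, z) in metres, used loosely for illustration

def world_to_wim(p_world: Vec3, world_anchor: Vec3,
                 wim_origin: Vec3, scale: float) -> Vec3:
    """Map a world-space point into miniature space.
    scale is e.g. 0.01 for a 1:100 miniature."""
    return tuple(wo + scale * (pw - wa)
                 for pw, wa, wo in zip(p_world, world_anchor, wim_origin))

def wim_to_world(p_wim: Vec3, world_anchor: Vec3,
                 wim_origin: Vec3, scale: float) -> Vec3:
    """Inverse mapping: a point indicated on the miniature
    (e.g. by selection or pointing) back to full scale."""
    return tuple(wa + (pm - wo) / scale
                 for pm, wo, wa in zip(p_wim, wim_origin, world_anchor))
```

A full WIM held in the hand would additionally apply the miniature's own rotation; the sketch keeps the miniature axis-aligned to isolate the scaling idea.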
Interesting aspects to explore are:
Rendering Dimension: 2D vs. 3D
We live in a 3D world, and with HMDs, 3D worlds can be simulated. For navigation, however, we typically use 2D maps or maps displayed on a 2D screen. The original WIM is visualised in 3D, but a 2D version could be more suitable for the original applications of the WIM.
Rendering Quality: Maximum vs. Abstract
For exploration and search tasks humans unconsciously build and use a cognitive map consisting of landmarks, routes and survey knowledge (Siegel & White, 1975; Thorndyke & Goldin, 1983; Darken & Petersen, 1998). The original WIM is a scaled version of the whole environment. An abstract scaled version of the environment could be more suitable for the original applications of the WIM.
Communication Technique: Unimodal vs. Multimodal
In reality, groups use face-to-face communication to organise and argue about their objectives (Pausch, Burnette, Brockway & Weiblen, 1995). In a virtual environment, the available interaction techniques, defined by the VR framework, determine how users can communicate with each other. Without hand-tracking data or any other data that can represent gestures, the only channel for communication is the user's voice. But it can be hard to describe navigation information verbally. Assuming a touch screen is available on which users can indicate points or paths on the WIM, we want to test whether a multimodal interaction combining voice and an indication technique supports the communication of navigation issues.
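The touch-based indication technique can be understood as the inverse of the WIM mapping applied to a 2D touch coordinate: a touched pixel on a top-down miniature map is converted back into a world position, which can then be shown to the other users. The sketch below assumes a top-down map with a known pixel scale and origin; all names and the assumed screen layout are illustrative, not part of any existing touch-table API.

```python
def touch_to_world(touch_xy, map_origin_xy, pixels_per_metre,
                   floor_height=0.0):
    """Convert a touch-table pixel coordinate on a top-down WIM map
    into a 3D world position on the museum floor (assumed layout:
    screen x -> world x, screen y -> world z, y-up world)."""
    tx, ty = touch_xy
    ox, oy = map_origin_xy
    return ((tx - ox) / pixels_per_metre,   # world x
            floor_height,                   # world y: floor level
            (ty - oy) / pixels_per_metre)   # world z
```

The resulting world position could then be visualised for all users, e.g. as a highlighted marker or waypoint, complementing the voice channel.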
Approach
The approach of this project is to develop an experiment apparatus and run different experiments exploring possible interaction techniques. The project is divided into five phases: Experiment Preparation, three Experiment Execution phases, and Evaluation and Documentation. In the Experiment Execution phases, the aspects 2D vs. 3D, Maximum vs. Abstract, and Unimodal vs. Multimodal will be explored. Each phase will take one month to execute. One month of the six-month master's thesis timespan is scheduled for unforeseen circumstances.
Phase 1: Experiment and System Preparation
- Phase Duration: 1 Month
- Tasks:
- cap off the literature research on the topics of collaboration, navigation and communication in VR
- develop a state-of-the-art virtual reality testing platform for experiments, reproducing the Holodeck 4.0's limitations, using the Vicon tracking system in combination with Samsung Gear devices and including a touch table for the exploration of communication techniques
- draw up the general experiment design consisting of accurate definitions of participants’ tasks, dependent and independent variables, appropriate measurement methods, a bias-control strategy and a general experiment protocol
- run and analyse a pilot study to ensure overall system functionality and performance
- patch the testing platform based on the results from the pilot study
Phase 2-4: Experiment Execution
- Phase Duration: 1 Month
- Description:
In phases 2 to 4 we will conduct experiments measuring the usability, effectiveness and efficiency of the respective solution, using objective and subjective measurements within a between-subject experiment design.
- Tasks:
- implement the respective experiment conditions
- conduct and analyse a pilot study with one experiment run
- patch the testing platform based on the results from the pilot study
- conduct and analyse the experiment
- update report
Phase 5: Evaluation and Writing Report
- Phase Duration: 1 Month
- Tasks:
- overall evaluation of experiments and comparison of the results
- documentation of results
- finalise report

Fig. 3: project plan
References
Bowman, D. A., Kruijff, E., LaViola, J. J., & Poupyrev, I. (2004). 3D User Interfaces: Theory and Practice.
British Museum. (2015). Virtual reality weekend at the British Museum. Retrieved from https://www.britishmuseum.org/about_us/news_and_press/press_releases/2015/virtual_reality_weekend.aspx
Clark, H. H., Brennan, S. E., Resnick, L. B., Levine, J. M., & Teasley, S. D. (1991). Grounding in communication perspectives on socially shared cognition (pp. 127‐149). Washington, DC, US: American Psychological Association.
Darken, R. P., & Peterson, B. (2002). Spatial Orientation and Wayfinding: Calhoun, the NPS Institutional Archive. Retrieved from https://calhoun.nps.edu/handle/10945/46753
Fraunhofer IIS (2017). Holodeck 4.0 Virtual Reality. Retrieved from https://www.iis.fraunhofer.de/en/ff/lok/proj/holodeck.html
Pausch, R., Burnette, T., Brockway, D., & Weiblen, M. E. (1995). Navigation and locomotion in virtual worlds via flight into hand-held miniatures. In S. G. Mair (Ed.), Proceedings of the 22nd annual conference on Computer graphics and interactive techniques (pp. 399–400). New York, NY: ACM. doi:10.1145/218380.218495
Gross, R. (2014). The Importance of Taking Children to Museums. Retrieved from https://www.arts.gov/art-works/2014/importance-taking-children-museums
Siegel, A. W., & White, S. H. (1975). The Development of Spatial Representations of Large-Scale Environments. In Hayne W. Reese (Ed.), Advances in Child Development and Behavior (pp. 9–55). JAI. doi:10.1016/S0065-2407(08)60007-5
Smithsonian Institution Office of Policy and Analysis. (2004). Results of the 2004 Smithsonian-wide Survey of Museum Visitors.
Stoakley, R., Conway, M. J., & Pausch, R. (1995). Virtual reality on a WIM. In I. R. Katz (Ed.), Human factors in computing systems. CHI ‘95 conference proceedings (pp. 265–272). New York, New York, USA: ACM Press. doi:10.1145/223904.223938
Thorndyke, P. W., & Goldin, S. E. (2012). Spatial Learning and Reasoning Skill. In H. L. Pick & L. P. Acredolo (Eds.), Spatial Orientation. Theory, Research, and Application (pp. 195–217). Boston, MA: Springer Verlag. doi:10.1007/978-1-4615-9325-6_9
Tsapatori, M., et al. (2003). ORION Research Roadmap for the European archaeological museums' sector (Final Edition). Retrieved from http://www.orion-net.org/
Contact Persons at the University Würzburg
Dr. Jean-Luc Lugrin (Primary Contact Person), HCI, University of Würzburg
jean-luc.lugrin@uni-wuerzburg.de