2025
Ronja Heinrich, Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces. In Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25). Association for Computing Machinery, 2025.
@inproceedings{heinrich2025systematic,
title = {A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces},
author = {Heinrich, Ronja and Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25)},
year = {2025},
publisher = {Association for Computing Machinery},
url = {https://dl.acm.org/doi/10.1145/3716553.3750790},
doi = {10.1145/3716553.3750790}
}
Abstract: This systematic review investigates the current state of research on multimodal fusion methods, i.e., the joint analysis of multimodal inputs, for intentional, instruction-based human-computer interactions, focusing on the combination of speech and spatially expressive modalities such as gestures, touch, pen, and gaze.
We examine 50 systems from a User-Centered Design perspective, categorizing them by modality combinations, fusion strategies, application domains and media, as well as reusability. Our findings highlight a predominance of descriptive late fusion methods, limited reusability, and a lack of standardized tool support, hampering rapid prototyping and broader applicability. We identify emerging trends in machine learning-based fusion and outline future research directions to advance reusable and user-centered multimodal systems.
Chris Zimmerer, Multimodal Interaction in Virtual and Extended Reality. PhD thesis, 2025.
@phdthesis{Zimmerer2025,
title = {Multimodal Interaction in Virtual and Extended Reality},
author = {Zimmerer, Chris},
year = {2025},
school = {Universität Würzburg},
doi = {10.25972/OPUS-42565}
}
Andrea Zimmerer, Lydia Bartels, Marc Erich Latoschik, The Impact of Performance-Specific Feedback from a Virtual Coach in a Virtual Reality Exercise Application. In 2025 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1031-1041. IEEE Computer Society, 2025.
@inproceedings{zimmerer2025feedback,
title = {The Impact of Performance-Specific Feedback from a Virtual Coach in a Virtual Reality Exercise Application},
author = {Zimmerer, Andrea and Bartels, Lydia and Latoschik, Marc Erich},
booktitle = {2025 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2025},
pages = {1031-1041},
publisher = {IEEE Computer Society},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ismar-feedback-from-a-virtual-coach-in-vr-exercise.pdf},
doi = {10.1109/ISMAR67309.2025.00110}
}
Abstract: Virtual reality (VR) exercise applications are promising tools, e.g., for at-home training and rehabilitation. However, existing applications vary significantly in key design choices such as environments, embodiment, and virtual coaching, making it difficult to derive clear design guidelines. A prominent design choice is the use of embodied virtual coaches, which guide user interaction and provide feedback. In a user study with 76 participants, we investigated how different levels of performance specificity in feedback from an embodied virtual coach affect intermediate factors, such as VR experience, motivation, and coach perception. Participants performed lower-body movement exercises, i.e., Leg Raises and Knee Extensions, commonly used in knee rehabilitation. We found that highly performance-specific feedback led to higher scores compared to medium specificity for perceived realism, as well as the anthropomorphism and sympathy of the virtual coach, but did not affect motivation. Based on our findings, we propose the design suggestion to include precise, performance-specific details when creating feedback for a virtual coach. We observed a descriptive pattern of higher scores in the low specificity condition compared to the medium condition on most measures, which raises the possibility that less specific feedback may, in some cases, be perceived more positively than moderately specific feedback. These findings provide valuable insights into how design choices impact relevant intermediate factors that are crucial for maximizing both workout effectiveness and the quality of the virtual coaching experience.
2022
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, A Case Study on the Rapid Development of Natural and Synergistic Multimodal Interfaces for XR Use-Cases. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. New York, NY, USA: Association for Computing Machinery, 2022.
@inproceedings{10.1145/3491101.3503552,
title = {A Case Study on the Rapid Development of Natural and Synergistic Multimodal Interfaces for XR Use-Cases},
author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-chi-case-study-mmi-zimmerer.pdf},
doi = {10.1145/3491101.3503552}
}
Abstract: Multimodal Interfaces (MMIs) supporting the synergistic use of natural modalities like speech and gesture have been conceived as promising for spatial or 3D interactions, e.g., in Virtual, Augmented, and Mixed Reality (XR for short). Yet, the currently prevailing user interfaces are unimodal. Commercially available software platforms like the Unity or Unreal game engines simplify the complexity of developing XR applications through appropriate tool support. They provide ready-to-use device integration, e.g., for 3D controllers or motion tracking, and according interaction techniques such as menus, (3D) point-and-click, or even simple symbolic gestures to rapidly develop unimodal interfaces. A comparable tool support is yet missing for multimodal solutions in this and similar areas. We believe that this hinders user-centered research based on rapid prototyping of MMIs, the identification and formulation of practical design guidelines, the development of killer applications highlighting the power of MMIs, and ultimately a widespread adoption of MMIs. This article investigates potential reasons for the ongoing uncommonness of MMIs. Our case study illustrates and analyzes lessons learned during the development and application of a toolchain that supports rapid development of natural and synergistic MMIs for XR use-cases. We analyze the toolchain in terms of developer usability, development time, and MMI customization. This analysis is based on the knowledge gained in years of research and academic education. Specifically, it reflects on the development of appropriate MMI tools and their application in various demo use-cases, in user-centered research, and in the lab work of a mandatory MMI course of an HCI master’s program. The derived insights highlight successful choices made as well as potential areas for improvement.
René Stingl, Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, Are You Referring to Me? - Giving Virtual Objects Awareness. In 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 671-673. 2022.
@inproceedings{9974498,
title = {Are You Referring to Me? - Giving Virtual Objects Awareness},
author = {Stingl, René and Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
year = {2022},
pages = {671-673},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ismar-natural-pointing-preprint.pdf},
doi = {10.1109/ISMAR-Adjunct57072.2022.00139}
}
Abstract: This work introduces an interaction technique to determine the user’s non-verbal deixis in Virtual Reality (VR) applications. We tailored it for multimodal speech & gesture interfaces (MMIs). Here, non-verbal deixis is often determined by the use of ray-casting due to its simplicity and intuitiveness. However, ray-casting’s rigidness and dichotomous nature pose limitations concerning the MMI’s flexibility and efficiency. In contrast, our technique considers a more comprehensive set of directional cues to determine non-verbal deixis and provides probabilistic output to tackle these limitations. We present a machine-learning-based reference implementation of our technique in VR and the results of a first performance benchmark. Future work includes an in-depth user study evaluating our technique’s user experience in an MMI.
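The probabilistic output described in this abstract can be illustrated with a toy scoring function that turns the angular agreement between one directional cue and each candidate object into a distribution via a softmax over cosine similarity. All names and the sharpness constant are invented for illustration; the paper's actual implementation is machine-learning-based and fuses several cues at once.

```python
import math

def deixis_scores(cue_dir, object_dirs, sharpness=5.0):
    """Softmax over cosine similarity between a pointing direction and
    candidate object directions. `sharpness` (assumed value) controls
    how strongly the distribution favors well-aligned objects."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(dot(a, a))

    sims = [dot(cue_dir, d) / (norm(cue_dir) * norm(d)) for d in object_dirs]
    exps = [math.exp(sharpness * s) for s in sims]
    total = sum(exps)
    return [e / total for e in exps]

# Three candidates; the first lies exactly along the pointing direction,
# so it receives the largest share of the probability mass.
probs = deixis_scores((0.0, 0.0, 1.0),
                      [(0.0, 0.0, 1.0), (0.5, 0.0, 0.87), (1.0, 0.0, 0.0)])
print([round(p, 2) for p in probs])
```

Unlike dichotomous ray-casting, every object keeps a nonzero probability, which is what allows a fusion engine to defer the final decision until speech evidence arrives.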
Chris Zimmerer, Philipp Krop, Martin Fischbach, Marc Erich Latoschik, Reducing the Cognitive Load of Playing a Digital Tabletop Game with a Multimodal Interface. In CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery, 2022.
@inproceedings{10.1145/3491102.3502062,
title = {Reducing the Cognitive Load of Playing a Digital Tabletop Game with a Multimodal Interface},
author = {Zimmerer, Chris and Krop, Philipp and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {CHI Conference on Human Factors in Computing Systems},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://dl.acm.org/doi/10.1145/3491102.3502062},
doi = {10.1145/3491102.3502062}
}
Abstract: Multimodal Interfaces (MMIs) combining speech and spatial input have the potential to elicit minimal cognitive load. Low cognitive load increases effectiveness as well as user satisfaction and is regarded as an important aspect of intuitive use. While this potential has been extensively theorized in the research community, experiments that provide supporting observations based on functional interfaces are still scarce. In particular, there is a lack of studies comparing the commonly used Unimodal Interfaces (UMIs) with theoretically superior synergistic MMI alternatives. Yet, these studies are an essential prerequisite for generalizing results, developing practice-oriented guidelines, and ultimately exploiting the potential of MMIs in a broader range of applications. This work contributes a novel observation towards the resolution of this shortcoming in the context of the following combination of applied interaction techniques, tasks, application domain, and technology: We present a comprehensive evaluation of a synergistic speech & touch MMI and a touch-only menu-based UMI (interaction techniques) for selection and system control tasks in a digital tabletop game (application domain) on an interactive surface (technology). Cognitive load, user experience, and intuitive use are evaluated, with the former being assessed by means of the dual-task paradigm. Our experiment shows that the implemented MMI causes significantly less cognitive load and is perceived significantly more usable and intuitive than the UMI. Based on our results, we derive recommendations for the interface design of digital tabletop games on interactive surfaces. Further, we argue that our results and design recommendations are suitable to be generalized to other application domains on interactive surfaces for selection and system control tasks.
2020
Chris Zimmerer, Ronja Heinrich, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik, Computing Object Selection Difficulty in VR Using Run-Time Contextual Analysis. In 26th ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA: Association for Computing Machinery, 2020. Best Poster Award 🏆
@inproceedings{10.1145/3385956.3422089,
title = {Computing Object Selection Difficulty in VR Using Run-Time Contextual Analysis},
author = {Zimmerer, Chris and Heinrich, Ronja and Fischbach, Martin and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {26th ACM Symposium on Virtual Reality Software and Technology},
year = {2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
note = {Best Poster Award 🏆},
url = {https://doi.org/10.1145/3385956.3422089},
doi = {10.1145/3385956.3422089}
}
Abstract: This paper introduces a method for computing the difficulty of selection tasks in virtual environments using pointing metaphors by operationalizing an established human motor behavior model. In contrast to previous work, the difficulty is calculated automatically at run-time for arbitrary environments. We present and provide the implementation of our method within Unity 3D. The difficulty is computed based on a contextual analysis of spatial boundary conditions, i.e., target object size and shape, distance to the user, and occlusion. We believe our method will enable developers to build adaptive systems that automatically equip the user with the most appropriate selection technique according to the context. Further, it provides a standard metric to better evaluate and compare different selection techniques.
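The abstract does not name the operationalized motor behavior model, but the canonical choice for pointing-based selection is Fitts' law; a minimal sketch of its Shannon-formulation index of difficulty follows (function name and units are assumptions, and the paper's occlusion analysis is not modeled here):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits.

    distance: distance from the user to the target
    width: effective target size along the approach axis (same units)
    """
    if width <= 0 or distance < 0:
        raise ValueError("expected width > 0 and distance >= 0")
    return math.log2(distance / width + 1)

# Example: a target 2 m away with an effective width of 0.25 m.
print(round(index_of_difficulty(2.0, 0.25), 2))  # → 3.17
```

Computing such a value per object at run-time is what would let an adaptive system switch to a coarser selection technique when the difficulty exceeds a threshold.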
Chris Zimmerer, Erik Wolf, Sara Wolf, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik, Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality. In 2020 International Conference on Multimodal Interaction, pp. 222–231. 2020. Best Paper Nominee 🏆
@inproceedings{10.1145/3382507.3418850,
title = {Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality},
author = {Zimmerer, Chris and Wolf, Erik and Wolf, Sara and Fischbach, Martin and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {2020 International Conference on Multimodal Interaction},
year = {2020},
pages = {222–231},
note = {Best Paper Nominee 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-icmi-1169-preprint.pdf},
doi = {10.1145/3382507.3418850}
}
Abstract: Multimodal Interfaces (MMIs) have been considered to provide promising interaction paradigms for Virtual Reality (VR) for some time. However, they are still far less common than unimodal interfaces (UMIs). This paper presents a summative user study comparing an MMI to a typical UMI for a design task in VR. We developed an application targeting creative 3D object manipulations, i.e., creating 3D objects and modifying typical object properties such as color or size. The associated open user task is based on the Torrance Tests of Creative Thinking. We compared a synergistic multimodal interface using speech-accompanied pointing/grabbing gestures with a more typical unimodal interface using a hierarchical radial menu to trigger actions on selected objects. Independent judges rated the creativity of the resulting products using the Consensual Assessment Technique. Additionally, we measured the creativity-promoting factors flow, usability, and presence. Our results show that the MMI performs on par with the UMI in all measurements despite its limited flexibility and reliability. These promising results demonstrate the technological maturity of MMIs and their potential to extend traditional interaction techniques in VR efficiently.
2019
Erik Wolf, Sara Klüber, Chris Zimmerer, Jean-Luc Lugrin, Marc Erich Latoschik, "Paint that object yellow": Multimodal Interaction to Enhance Creativity During Design Tasks in VR. In 2019 International Conference on Multimodal Interaction, pp. 195-204. 2019. Best Paper Runner-Up 🏆
@inproceedings{wolf2019paint,
title = {"Paint that object yellow": Multimodal Interaction to Enhance Creativity During Design Tasks in VR},
author = {Wolf, Erik and Klüber, Sara and Zimmerer, Chris and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {2019 International Conference on Multimodal Interaction},
year = {2019},
pages = {195-204},
note = {Best Paper Runner-Up 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-icmi-creativity-in-vr.pdf},
doi = {10.1145/3340555.3353724}
}
Abstract: Virtual reality (VR) has always been considered a promising medium to support designers with alternative work environments. Still, graphical user interfaces are prone to induce attention shifts between the user interface and the manipulated target objects, which hampers the creative process. This work proposes a speech-and-gesture-based interaction paradigm for creative tasks in VR. We developed a multimodal toolbox (MTB) for VR-based design applications and compared it to a typical unimodal menu-based toolbox (UTB). The comparison uses a design-oriented use-case and measures flow, usability, and presence as relevant characteristics for a VR-based design process. The multimodal approach (1) led to a lower perceived task duration and a higher reported feeling of flow. It (2) provided a higher intuitive use and a lower mental workload while not being slower than a UTB. Finally, it (3) generated a higher feeling of presence. Overall, our results confirm significant advantages of the proposed multimodal interaction paradigm and the developed MTB for important characteristics of design processes in VR.
2018
Martin Fischbach, Michael Brandt, Chris Zimmerer, Jean-Luc Lugrin, Marc Erich Latoschik, Birgit Lugrin, Follow the White Robot - A Role-Playing Game with a Robot Game Master. In 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2018), pp. 1812-1814. ACM, 2018.
@inproceedings{fischbach:2018ab,
title = {Follow the White Robot - A Role-Playing Game with a Robot Game Master},
author = {Fischbach, Martin and Brandt, Michael and Zimmerer, Chris and Lugrin, Jean-Luc and Latoschik, Marc Erich and Lugrin, Birgit},
booktitle = {17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2018)},
year = {2018},
pages = {1812-1814},
publisher = {ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-aamas-demo-white-robot-camera-ready-v2-preprint.pdf}
}
Abstract: We describe a social robot acting as a game master in an interactive tabletop role-playing game. The Robot Game Master (RGM) takes on the role of different characters, which the human players meet during the adventure, as well as of the narrator. The demonstration presents a novel software and hardware platform that allows the robot to (1) proactively lead through the storyline and to (2) react to changes in the ongoing game in real-time, while (3) fostering players' collaborations.
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks. In Multimodal Technologies and Interaction, Vol. 2 (4), p. 81 ff. MDPI, 2018.
@article{zimmerer:2018,
title = {Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks},
author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
journal = {Multimodal Technologies and Interaction},
year = {2018},
volume = {2},
number = {4},
pages = {81ff.},
publisher = {MDPI},
url = {https://www.mdpi.com/2414-4088/2/4/81},
doi = {10.3390/mti2040081}
}
Abstract: Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial to implement semantic fusion. They are compliant with rapid development cycles that are common for the development of user interfaces, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: Action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as the support of chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution for fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof of concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills the lack amongst previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: Its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation was and is used in various student projects, theses, as well as master-level courses. It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
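The procedural fusion idea underlying this article can be illustrated with a drastically simplified transition network that binds a spoken verb to the next deictic gesture arriving within a time window. All names here are invented for illustration; the cATN described in the article additionally handles concurrency, chronologically unsorted input, probabilistic input, and semantic queries.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str     # "speech" or "gesture"
    value: str        # recognized verb or referent id
    timestamp: float  # seconds

class TinyFusionNet:
    """Two-state sketch of procedural semantic fusion: wait for a spoken
    verb, then bind the next gesture referent arriving within `window`."""

    def __init__(self, window=2.0):
        self.window = window  # max speech-to-gesture gap in seconds (assumed)
        self._verb = None
        self._t = None

    def feed(self, ev):
        if ev.modality == "speech":
            # Enter the "verb heard" state and remember when.
            self._verb, self._t = ev.value, ev.timestamp
        elif ev.modality == "gesture" and self._verb is not None:
            if ev.timestamp - self._t <= self.window:
                # Temporal relation satisfied: derive the fused action.
                action, self._verb = (self._verb, ev.value), None
                return action
        return None

net = TinyFusionNet()
net.feed(Event("speech", "delete", 0.1))
print(net.feed(Event("gesture", "object_42", 0.8)))  # → ('delete', 'object_42')
```

Even this toy version shows why procedural fusion suits rapid prototyping: the grammar of a new multimodal command is a handful of declarative transitions rather than a retrained model.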
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, Space Tentacles - Integrating Multimodal Input into a VR Adventure Game. In Proceedings of the 25th IEEE Virtual Reality (VR) conference, pp. 745-746. IEEE, 2018.
@inproceedings{zimmerer2018space,
title = {Space Tentacles - Integrating Multimodal Input into a VR Adventure Game},
author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 25th IEEE Virtual Reality (VR) conference},
year = {2018},
pages = {745-746},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ieeevr-space-tentacle-preprint.pdf}
}
Abstract: Multimodal interfaces for Virtual Reality (VR), e.g., based on speech and gesture input/output (I/O), often exhibit complex system architectures. Tight couplings between the required I/O processing stages and the underlying scene representation and the simulator system’s flow-of-control tend to result in high development and maintainability costs. This paper presents a maintainable solution for realizing such interfaces by means of a cherry-picking approach. A reusable multimodal I/O processing platform is combined with the simulation and rendering capabilities of the Unity game engine, allowing to exploit the game engine’s superior API usability and tool support. The approach is illustrated based on the development of a multimodal VR adventure game called Space Tentacles.
2017
Dennis Wiebusch, Chris Zimmerer, Marc Erich Latoschik, Cherry-Picking RIS Functionality -- Integration of Game and VR Engine Sub-Systems based on Entities and Events. In 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), pp. 1-8. IEEE Computer Society, 2017.
@inproceedings{wiebusch2017,
title = {Cherry-Picking RIS Functionality -- Integration of Game and VR Engine Sub-Systems based on Entities and Events},
author = {Wiebusch, Dennis and Zimmerer, Chris and Latoschik, Marc Erich},
booktitle = {10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
year = {2017},
pages = {1-8},
publisher = {IEEE Computer Society},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-ieeevr-searis-cherry-picking-preprint.pdf}
}
Abstract: Modern game engines provide a variety of high-end features and sub-systems which have made them increasingly interesting for AR/VR research. Here, it often is necessary to combine features from different sources. This paper presents an approach based on entity-event state decoupling and exchange. The approach targets the combination of sub-systems from different sources which simulate functionally coherent aspects of the virtual objects like physics, graphics, AI, or developer services like state editing. The approach decouples specific internal representations using a semantic description layer for identifiers, data types, and potential relations between them. We illustrate the main concepts using examples from the combination of the Unreal Engine 4, the Unity engine, and own research software and illustrate performance related aspects as a guideline for the choice of an appropriate transport layer.
2016
Sascha Link, Berit Barkschat, Chris Zimmerer, Martin Fischbach, Dennis Wiebusch, Jean-Luc Lugrin, Marc Erich Latoschik, An Intelligent Multimodal Mixed Reality Real-Time Strategy Game. In Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference, pp. 223-224. 2016.
@inproceedings{2016:linkaa,
title = {An Intelligent Multimodal Mixed Reality Real-Time Strategy Game},
author = {Link, Sascha and Barkschat, Berit and Zimmerer, Chris and Fischbach, Martin and Wiebusch, Dennis and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference},
year = {2016},
pages = {223-224},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-ieee-vr-poster-xroads-manuscript-reduced-file-size.pdf}
}
Abstract: This paper presents a mixed reality tabletop role-playing game with a novel combination of interaction styles and gameplay mechanics. Our contribution extends previous approaches by abandoning the traditional turn-based gameplay in favor of simultaneous real-time interaction. The increased cognitive and physical load during the simultaneous control of multiple game characters is counteracted by two features: First, certain game characters are equipped with AI-driven capabilities to become semi-autonomous virtual agents. Second, (groups of) these agents can be instructed by high-level commands via a multimodal—speech and gesture—interface.
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, Maintainable Management and Access of Lexical Knowledge for Multimodal Virtual Reality Interfaces. In Proceedings of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST), pp. 347-348. ACM, 2016.
@inproceedings{zimmerer2016maintainable,
title = {Maintainable Management and Access of Lexical Knowledge for Multimodal Virtual Reality Interfaces},
author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2016},
pages = {347-348},
publisher = {ACM},
url = {http://dl.acm.org/authorize?N40677}
}
Abstract: This poster presents a maintainable method to manage lexical information required for multimodal interfaces. It is tailored for the application in real-time interactive systems, specifically for Virtual Reality, and solves three problems commonly encountered in this context: (1) The lexical information is defined on and grounded in a common knowledge representation layer (KRL) based on OWL. The KRL describes application objects and possible system functions in one place and avoids error-prone redundant data management. (2) The KRL is tightly integrated into the simulator platform using a semantically enriched object model that is auto-generated from the KRL and thus fosters high performance access. (3) A well-defined interface provides application wide access to semantic application state information in general and the lexical information in specific, which greatly contributes to decoupling, maintainability, and reusability.
2014
Martin Fischbach, Chris Zimmerer, Anke Giebler-Schubert, Marc Erich Latoschik, Exploring multimodal interaction techniques for a mixed reality digital surface (demo). In IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 335-336. 2014.
@inproceedings{fischbach2014exploring,
title = {Exploring multimodal interaction techniques for a mixed reality digital surface (demo)},
author = {Fischbach, Martin and Zimmerer, Chris and Giebler-Schubert, Anke and Latoschik, Marc Erich},
booktitle = {IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2014},
pages = {335-336},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2014-ismar-fischbach-xroads-draft.pdf}
}
Abstract: Quest - XRoads is a multimodal and multimedia mixed reality version of the traditional role-play tabletop game Quest: Zeit der Helden. The original game concept is augmented with virtual content, controllable via auditory, tangible and spatial interfaces to permit a novel gaming experience and to increase the satisfaction while playing. The demonstration consists of a turn-based skirmish, where up to four players have to collaborate to defeat an opposing player. In order to be victorious, players have to control heroes or villains and use their abilities via speech, gesture, touch as well as tangible interactions.
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, Fusion of mixed reality tabletop and location-based applications for pervasive games. In Proceedings of the 2014 ACM International Conference on Interactive Tabletops and Surfaces, pp. 427-430. ACM, 2014.
@inproceedings{zimmerer2014fusion,
title = {Fusion of mixed reality tabletop and location-based applications for pervasive games},
author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 2014 ACM International Conference on Interactive Tabletops and Surfaces},
year = {2014},
pages = {427-430},
publisher = {ACM},
url = {http://dl.acm.org/authorize?N11771}
}
2013
Anke Giebler-Schubert, Chris Zimmerer, Thomas Wedler, Martin Fischbach, Marc Erich Latoschik, Ein digitales Tabletop-Rollenspiel für Mixed-Reality-Interaktionstechniken. In Virtuelle und Erweiterte Realität, 10. Workshop der GI-Fachgruppe VR/AR, Marc Erich Latoschik, Oliver Staadt, Frank Steinicke (Eds.), pp. 181-184. Shaker Verlag, 2013.
@inproceedings{gieblerschubert2013digitales,
title = {Ein digitales Tabletop-Rollenspiel für Mixed-Reality-Interaktionstechniken},
author = {Giebler-Schubert, Anke and Zimmerer, Chris and Wedler, Thomas and Fischbach, Martin and Latoschik, Marc Erich},
editor = {Latoschik, Marc Erich and Staadt, Oliver and Steinicke, Frank},
booktitle = {Virtuelle und Erweiterte Realität, 10. Workshop der GI-Fachgruppe VR/AR},
year = {2013},
pages = {181-184},
publisher = {Shaker Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2013-vrar-ein-digitales-tabletop-rollenspiel-fuer-mixed-reality-interaktionstechniken.pdf}
}
Abstract: This article describes the digital implementation of a role-playing board game for exploring new interaction techniques. A multitouch table with object recognition for tangible game elements (figures, cards, ...) serves as the shared mixed reality play environment. The system augments the real objects with multimedia information according to the current state of play. The integration of mobile devices via an HTML5 interface enables private and personalized interaction areas. The system combines different interaction techniques, such as touch input and interaction with tangible objects, in order to positively influence user satisfaction during interactions. A pilot study with users experienced in role-playing games examines the acceptance of the new game and interaction possibilities.