Human-Computer Interaction
Summer EXPO 2025 Recap
The Summer EXPO 2025 for HCI/HCS, CS, and GE was a great success! Many visitors experienced up to 120 different demos and projects.
Summer EXPO 2025 Invitation
We invite you to this year's Summer EXPO on July 25!
6.5 Million Euros for Research on Digital Storytelling: HCI Group Joins New DFG Research Training Group TESDA
How do immersive technologies change the way we perceive and understand stories? The new DFG Research Training Group TESDA investigates this question. Prof. Marc Erich Latoschik and his team from the HCI Group play a major role, contributing three subprojects on storytelling in immersive virtual reality.
AIL AT WORK @ denkbares Networking Day
The AIL team contributed to the event with three demonstrators, two questionnaires, and training material on AI literacy.

Recent Publications

Andrea Zimmerer, Lydia Bartels, Marc Erich Latoschik, The Impact of Performance-Specific Feedback from a Virtual Coach in a Virtual Reality Exercise Application, In IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE Computer Society, 2025. (accepted for publication at ISMAR 2025)
[BibSonomy]
@inproceedings{zimmerer2025feedback,
  author    = {Andrea Zimmerer and Lydia Bartels and Marc Erich Latoschik},
  title     = {The Impact of Performance-Specific Feedback from a Virtual Coach in a Virtual Reality Exercise Application},
  booktitle = {IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
  publisher = {IEEE Computer Society},
  year      = {2025}
}
Abstract: Virtual reality (VR) exercise applications are promising tools, e.g., for at-home training and rehabilitation. However, existing applications vary significantly in key design choices such as environments, embodiment, and virtual coaching, making it difficult to derive clear design guidelines. A prominent design choice is the use of embodied virtual coaches, which guide user interaction and provide feedback. In a user study with 76 participants, we investigated how different levels of performance specificity in feedback from an embodied virtual coach affect intermediate factors, such as VR experience, motivation, and coach perception. Participants performed lower-body movement exercises, i.e., Leg Raises and Knee Extensions, commonly used in knee rehabilitation. We found that highly performance-specific feedback led to higher scores compared to medium specificity for perceived realism, as well as the anthropomorphism and sympathy of the virtual coach, but did not affect motivation. Based on our findings, we propose the design suggestion to include precise, performance-specific details when creating feedback for a virtual coach. We observed a descriptive pattern of higher scores in the low specificity condition compared to the medium condition on most measures, which raises the possibility that less specific feedback may, in some cases, be perceived more positively than moderately specific feedback. These findings provide valuable insights into how design choices impact relevant intermediate factors that are crucial for maximizing both workout effectiveness and the quality of the virtual coaching experience.
Christian Merz, Niklas Krome, Carolin Wienrich, Stefan Kopp, Marc Erich Latoschik, The Impact of AI-Based Real-Time Gesture Generation and Immersion on the Perception of Others and Interaction Quality in Social XR, In IEEE Transactions on Visualization and Computer Graphics. 2025. (to be published)
[BibSonomy]
@article{merz2025impact,
  author  = {Christian Merz and Niklas Krome and Carolin Wienrich and Stefan Kopp and Marc Erich Latoschik},
  title   = {The Impact of AI-Based Real-Time Gesture Generation and Immersion on the Perception of Others and Interaction Quality in Social XR},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  year    = {2025}
}
Abstract: This study explores how people interact in dyadic social eXtended Reality (XR), focusing on two main factors: the animation type of a conversation partner’s avatar and how immersed the user feels in the virtual environment. Specifically, we investigate how 1) idle behavior, 2) AI-generated gestures, and 3) motion-captured movements from a confederate (a controlled partner in the study) influence the quality of conversation and how that partner is perceived. We examined these effects in both symmetric interactions (where both participants use VR headsets and controllers) and asymmetric interactions (where one participant uses a desktop setup). We developed a social XR platform that supports asymmetric device configurations to provide varying levels of immersion. The platform also supports a modular avatar animation system providing idle behavior, real-time AI-generated co-speech gestures, and full-body motion capture. Using a 2×3 mixed design with 39 participants, we measured users’ sense of spatial presence, their perception of the confederate, and the overall conversation quality. Our results show that users who were more immersed felt a stronger sense of presence and viewed their partner as more human-like and believable. Surprisingly, however, the type of avatar animation did not significantly affect conversation quality or how the partner was perceived. Participants often reported focusing more on what was said rather than how the avatar moved.
Ronja Heinrich, Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces, In Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25). Association for Computing Machinery, 2025.
[BibSonomy]
@inproceedings{heinrich2025systematic,
  author    = {Ronja Heinrich and Chris Zimmerer and Martin Fischbach and Marc Erich Latoschik},
  title     = {A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces},
  booktitle = {Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25)},
  publisher = {Association for Computing Machinery},
  year      = {2025}
}
Abstract: This systematic review investigates the current state of research on multimodal fusion methods, i.e., the joint analysis of multimodal inputs, for intentional, instruction-based human-computer interactions, focusing on the combination of speech and spatially expressive modalities such as gestures, touch, pen, and gaze. We examine 50 systems from a User-Centered Design perspective, categorizing them by modality combinations, fusion strategies, application domains and media, as well as reusability. Our findings highlight a predominance of descriptive late fusion methods, limited reusability, and a lack of standardized tool support, hampering rapid prototyping and broader applicability. We identify emerging trends in machine learning-based fusion and outline future research directions to advance reusable and user-centered multimodal systems.
Andreas Halbig, Marc Erich Latoschik, The Interwoven Nature of Spatial Presence and Virtual Embodiment: A Comprehensive Perspective, In Frontiers in Virtual Reality, Vol. 6. 2025.
[Download] [BibSonomy] [Doi]
@article{halbig-interwoven,
  author  = {Andreas Halbig and Marc Erich Latoschik},
  title   = {The Interwoven Nature of Spatial Presence and Virtual Embodiment: A Comprehensive Perspective},
  journal = {Frontiers in Virtual Reality},
  volume  = {6},
  year    = {2025},
  doi     = {10.3389/frvir.2025.1616662},
  url     = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2025.1616662/full}
}
Samuel Truman, Sebastian von Mammen, VIA-VR: A Platform to Streamline the Development of Virtual Reality Serious Games for Healthcare, In 2025 IEEE 38th International Symposium on Computer-Based Medical Systems (CBMS), pp. 463-468. 2025.
[Download] [BibSonomy] [Doi]
@inproceedings{11058787,
  author    = {Samuel Truman and Sebastian von Mammen},
  title     = {VIA-VR: A Platform to Streamline the Development of Virtual Reality Serious Games for Healthcare},
  booktitle = {2025 IEEE 38th International Symposium on Computer-Based Medical Systems (CBMS)},
  pages     = {463-468},
  year      = {2025},
  doi       = {10.1109/CBMS65348.2025.00098},
  url       = {https://ieeexplore.ieee.org/document/11058787}
}
Sebastian Oberdörfer, Melina Heinisch, Tobias Mühling, Verena Schreiner, Sarah König, Marc Erich Latoschik, Ready for VR? Assessing VR Competence and Exploring the Role of Human Abilities and Characteristics, In Frontiers in Virtual Reality. 2025.
[Download] [BibSonomy]
@article{oberdorfer2025ready,
  author  = {Sebastian Oberdörfer and Melina Heinisch and Tobias Mühling and Verena Schreiner and Sarah König and Marc Erich Latoschik},
  title   = {Ready for VR? Assessing VR Competence and Exploring the Role of Human Abilities and Characteristics},
  journal = {Frontiers in Virtual Reality},
  year    = {2025},
  url     = {http://downloads.hci.informatik.uni-wuerzburg.de/2025-oberdoerfer-frontiers-vr-competence-preprint.pdf}
}
Abstract: The use of VR for educational purposes provides the opportunity for integrating VR applications into assessments or graded examinations. Interacting with a VR environment requires specific human abilities, thus suggesting the existence of a VR competence. With regard to the emerging field of VR-based examinations, this VR competence might influence a candidate's final grade and hence should be taken into account. In this paper, we proposed and developed a VR competence assessment application. The application features eight individual challenges that are based on generic 3D interaction techniques. In a pilot study, we measured the performance of 18 users. By identifying significant correlations between VR competence score, previous VR experience and theoretically-grounded contributing human abilities and characteristics, we provide first evidence that our VR competence assessment is effective. In addition, we provide first data that a specific VR competence exists. Our analyses further revealed that mainly spatial ability but also immersive tendency correlated with VR competence scores. These insights not only allow educators and researchers to assess and potentially equalize the VR competence level of their subjects, but also help designers to provide effective tutorials for first-time VR users.
Peter Kullmann, Theresa Schell, Mario Botsch, Marc Erich Latoschik, Eye-to-eye or face-to-face? Face and head substitution for co-located augmented reality, In Frontiers in Virtual Reality, Vol. 6. 2025.
[Download] [BibSonomy] [Doi]
@article{kullmann2025eyetoeye,
  author  = {Peter Kullmann and Theresa Schell and Mario Botsch and Marc Erich Latoschik},
  title   = {Eye-to-eye or face-to-face? Face and head substitution for co-located augmented reality},
  journal = {Frontiers in Virtual Reality},
  volume  = {6},
  year    = {2025},
  doi     = {10.3389/frvir.2025.1594350},
  url     = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2025.1594350}
}
Abstract: In co-located extended reality (XR) experiences, headsets occlude their wearers’ facial expressions, impeding natural conversation. We introduce two techniques to mitigate this using off-the-shelf hardware: compositing a view of a personalized avatar behind the visor (“see-through visor”) and reducing the headset’s visibility and showing the avatar’s head (“head substitution”). We evaluated them in a repeated-measures dyadic study (N = 25) that indicated promising effects. Collaboration with a confederate with our techniques, compared to a no-avatar baseline, resulted in quicker consensus in a judgment task and enhanced perceived mutual understanding. However, the avatar was also rated and commented on as uncanny, though participant comments indicate tolerance for avatar uncanniness since they restore gaze utility. Furthermore, performance in an executive task deteriorated in the presence of our techniques, indicating that our implementation drew participants’ attention to their partner’s avatar and away from the task. We suggest giving users agency over how these techniques are applied and recommend using the same representation across interaction partners to avoid power imbalances.
S. Müller, A. Müller, S. Truman, T. Buhl, S. von Mammen, T. Brixner, femtoPro: Teaching and Training of Ultrafast Optics in Virtual Reality, In 2025 IEEE Conference on Education and Training in Optics and Photonics (ETOP), pp. 1-4. 2025.
[BibSonomy] [Doi]
@inproceedings{11030702,
  author    = {S. Müller and A. Müller and S. Truman and T. Buhl and S. von Mammen and T. Brixner},
  title     = {femtoPro: Teaching and Training of Ultrafast Optics in Virtual Reality},
  booktitle = {2025 IEEE Conference on Education and Training in Optics and Photonics (ETOP)},
  pages     = {1-4},
  year      = {2025},
  doi       = {10.1109/ETOP64842.2025.11030702}
}
See all publications here