About me
Ronja completed her bachelor’s degree (B.Sc.) in E-Commerce at the University of Applied Sciences Würzburg-Schweinfurt (FHWS). She then enrolled in the master’s program in Human-Computer Interaction at the University of Würzburg. Since June 2021, she has been working as a research assistant in the Human-Computer Interaction group. Her focus is on real-time interactive systems and multimodal interfaces.
2025
Ronja Heinrich, Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces, In Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25). Association for Computing Machinery, 2025.
@inproceedings{heinrich2025systematic,
title = {A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces},
author = {Heinrich, Ronja and Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25)},
year = {2025},
publisher = {Association for Computing Machinery},
url = {https://dl.acm.org/doi/10.1145/3716553.3750790},
doi = {10.1145/3716553.3750790}
}
Abstract: This systematic review investigates the current state of research on multimodal fusion methods, i.e., the joint analysis of multimodal inputs, for intentional, instruction-based human-computer interactions, focusing on the combination of speech and spatially expressive modalities such as gestures, touch, pen, and gaze.
We examine 50 systems from a User-Centered Design perspective, categorizing them by modality combinations, fusion strategies, application domains and media, as well as reusability. Our findings highlight a predominance of descriptive late fusion methods, limited reusability, and a lack of standardized tool support, hampering rapid prototyping and broader applicability. We identify emerging trends in machine learning-based fusion and outline future research directions to advance reusable and user-centered multimodal systems.
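As a minimal illustration of the descriptive late-fusion approach the review finds predominant, the sketch below merges separately interpreted speech and gesture hypotheses at the decision level. All names, data, and the confidence-combination rule are hypothetical assumptions for illustration, not the surveyed systems' APIs:

```python
# Minimal sketch of late (decision-level) fusion for a "put-that-there"-style
# command: each modality is interpreted on its own, and only the resulting
# semantic hypotheses are merged afterwards.

def fuse(speech_hypothesis, gesture_hypothesis):
    """Merge per-modality interpretations into one command frame."""
    action, s_conf = speech_hypothesis    # e.g. ("move", 0.9) from a recognizer
    target, g_conf = gesture_hypothesis   # e.g. ("red_cube", 0.8) from pointing
    return {
        "action": action,
        "target": target,
        # A simple product combines the two independent confidences.
        "confidence": s_conf * g_conf,
    }

command = fuse(("move", 0.9), ("red_cube", 0.8))
print(command["action"], command["target"])  # move red_cube
```

Early-fusion methods, by contrast, would combine the raw or low-level features of both modalities before any interpretation step.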
2020
Chris Zimmerer, Ronja Heinrich, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik, Computing Object Selection Difficulty in VR Using Run-Time Contextual Analysis, In 26th ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA: Association for Computing Machinery, 2020.
Best Poster Award 🏆
@inproceedings{10.1145/3385956.3422089,
title = {Computing Object Selection Difficulty in VR Using Run-Time Contextual Analysis},
author = {Zimmerer, Chris and Heinrich, Ronja and Fischbach, Martin and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {26th ACM Symposium on Virtual Reality Software and Technology},
year = {2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
note = {Best Poster Award 🏆},
url = {https://doi.org/10.1145/3385956.3422089},
doi = {10.1145/3385956.3422089}
}
Abstract: This paper introduces a method for computing the difficulty of selection tasks in virtual environments using pointing metaphors by operationalizing an established human motor behavior model. In contrast to previous work, the difficulty is calculated automatically at run-time for arbitrary environments. We present and provide the implementation of our method within Unity 3D. The difficulty is computed based on a contextual analysis of spatial boundary conditions, i.e., target object size and shape, distance to the user, and occlusion. We believe our method will enable developers to build adaptive systems that automatically equip the user with the most appropriate selection technique according to the context. Further, it provides a standard metric to better evaluate and compare different selection techniques.
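A widely used human motor behavior model for pointing tasks is Fitts's law, whose index of difficulty grows with target distance and shrinks with target width. Whether this is the exact model the paper operationalizes is an assumption here; the snippet only illustrates the general idea of a difficulty score derived from spatial boundary conditions:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(distance / width + 1)

# A far, small target scores harder than a near, large one:
print(index_of_difficulty(distance=2.0, width=0.1))  # ~4.39 bits
print(index_of_difficulty(distance=0.5, width=0.5))  # 1.0 bit
```

The paper's contribution goes beyond such a static formula by computing the relevant quantities (effective size, shape, distance, occlusion) automatically at run-time for arbitrary scenes.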
2019
Erik Wolf, Ronja Heinrich, Annabell Michalek, David Schraudt, Anna Hohm, Rebecca Hein, Tobias Grundgeiger, Oliver Happel, Rapid Preparation of Eye Tracking Data for Debriefing in Medical Training: A Feasibility Study, In 2019 Human Factors and Ergonomics Society Annual Meeting, Vol. 63 (1), pp. 733–737. 2019.
@inproceedings{wolf2019rapid,
title = {Rapid Preparation of Eye Tracking Data for Debriefing in Medical Training: A Feasibility Study},
author = {Wolf, Erik and Heinrich, Ronja and Michalek, Annabell and Schraudt, David and Hohm, Anna and Hein, Rebecca and Grundgeiger, Tobias and Happel, Oliver},
booktitle = {2019 Human Factors and Ergonomics Society Annual Meeting},
year = {2019},
volume = {63},
number = {1},
pages = {733--737},
url = {https://journals.sagepub.com/doi/pdf/10.1177/1071181319631032},
doi = {10.1177/1071181319631032}
}
Abstract: Simulation-based medical training is an increasingly used method to improve the technical and non-technical performance of clinical staff. An essential part of training is the debriefing of the participants, often using audio, video, or even eye tracking recordings. We conducted a practice-oriented feasibility study to test an eye tracking data preparation procedure, which automatically provided information about the gaze distribution on areas of interest such as the vital sign monitor or the patient simulator. We acquired eye tracking data during three simulation scenarios and provided gaze distribution data for debriefing within 30 minutes. Additionally, we qualitatively evaluated the usefulness of the generated eye tracking data for debriefings. Participating students and debriefers were mostly positive about the data provided; however, future research should improve the technical side of the procedure and investigate best practices regarding how to present and use the data in debriefings.
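The kind of gaze-distribution summary the abstract describes can be sketched as follows: given fixation samples already labeled with an area of interest (AOI), compute the share of gaze time each AOI received. The function name, data layout, and sample values are illustrative assumptions, not the study's actual pipeline:

```python
from collections import Counter

def gaze_distribution(fixations):
    """fixations: list of (aoi_label, duration_ms) pairs.

    Returns a mapping from AOI label to its fraction of total gaze time.
    """
    totals = Counter()
    for aoi, duration in fixations:
        totals[aoi] += duration
    grand_total = sum(totals.values())
    return {aoi: t / grand_total for aoi, t in totals.items()}

# Hypothetical samples: vital sign monitor vs. patient simulator.
samples = [("monitor", 300), ("patient", 500), ("monitor", 200)]
print(gaze_distribution(samples))  # {'monitor': 0.5, 'patient': 0.5}
```

In the study itself, the challenge was less the aggregation than delivering such summaries reliably within the 30-minute window before debriefing.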