Human-Computer Interaction
Dr. Franziska Westermeier Successful Dissertation
Effects of Incongruencies Across the Reality-Virtuality Continuum
Dr. Martin Mišiak Successful Dissertation
Realistic VR Rendering: Approximations, Optimizations and their Impact on Perception
Dr. Christian Rack Successful Dissertation
Show Me How You Move and I Tell You Who You Are - Motion-Based User Identification and Verification for the Metaverse
CAIDAS Scientific Opening
Reflecting on the CAIDAS Scientific Opening: A Landmark AI Conference at the University of Würzburg
Dr. André Markus Successful Dissertation
Loyal Game Changers: The Impact of AI-Driven Companions on Emotional, Cognitive, and Social Game Experiences and Practical Design Strategies.

Recent Publications

Franziska Westermeier, Effects of Incongruencies Across the Reality-Virtuality Continuum. 2026.
[Download] [BibSonomy] [Doi]
@phdthesis{westermeier2026effects, author = {Franziska Westermeier}, url = {https://doi.org/10.25972/OPUS-43370}, year = {2026}, doi = {10.25972/OPUS-43370}, title = {Effects of Incongruencies Across the Reality-Virtuality Continuum} }
Abstract: This dissertation examines the perceptual and cognitive effects of incongruencies in eXtended Reality (XR) experiences along the Reality-Virtuality (RV) continuum, with a particular focus on Virtual Reality (VR) and Video See-Through (VST) Augmented Reality (AR). VST AR integrates video images from front-facing cameras on the Head-Mounted Display (HMD) with virtual content, and XR HMDs capable of VST AR often also include a VR mode. While VR has been extensively studied, VST AR remains underexplored despite rapid advances in camera resolution and rendering techniques. The blending of virtual and real-world elements in VST AR frequently gives rise to perceptual mismatches, such as conflicting depth cues, misaligned virtual objects, and latency discrepancies, which challenge established XR frameworks and may adversely affect user experience. Incorporating five key publications and five empirical experiments, this dissertation investigates the effects of incongruencies in VR and VST AR by examining both subjective reports and objective behavioral measures. While users may not always consciously detect these mismatches, the empirical findings reveal their significant impact on depth perception, spatial judgments, and performance. A central focus of this work is the application and refinement of the Congruence and Plausibility (CaP) model, which describes how incongruencies operate at different processing levels, from low-level sensory distortions to higher-order cognitive inconsistencies. The results indicate that AR-inherent perceptual incongruencies influence the experience at a subconscious level, challenging existing theoretical frameworks that have primarily focused on visually coherent VR experiences. To further support this understanding, the dissertation introduces a methodological framework for analyzing and predicting the effects of incongruencies, contributing to the development of coherent and immersive XR applications. The conducted research affirms both the complexity and the promise of VST AR technologies. By revealing how subconscious factors interact with users’ conscious perceptions, this dissertation enriches theoretical understanding and provides strategies for advancing XR research.
Murat Yalcin, Marc Erich Latoschik, End-to-End Non-Invasive ECG Signal Generation from PPG Signal: A Self-Supervised Learning Approach, In Frontiers in Physiology. 2026. To be published.
[Download] [BibSonomy]
@article{yalcin2026endtoend, author = {Murat Yalcin and Marc Erich Latoschik}, journal = {Frontiers in Physiology}, url = {https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2026.1694995/abstract}, year = {2026}, title = {End-to-End Non-Invasive ECG Signal Generation from PPG Signal: A Self-Supervised Learning Approach} }
Abstract: Electrocardiogram (ECG) signals are frequently utilized for detecting important cardiac events, such as variations in ECG intervals, as well as for monitoring essential physiological metrics, including heart rate (HR) and heart rate variability (HRV). However, accurate ECG measurement traditionally requires a clinical environment, limiting its feasibility for continuous, everyday monitoring. In contrast, Photoplethysmography (PPG) offers a non-invasive, cost-effective optical method for capturing cardiac data in daily settings and is increasingly utilized in various clinical and commercial wearable devices, although PPG measurements are significantly less detailed than ECG. In this study, we propose a novel approach to synthesize ECG signals from PPG signals, facilitating the generation of robust ECG waveforms using a simple, unobtrusive wearable setup. Our approach utilizes a Transformer-based Generative Adversarial Network model designed to accurately capture ECG signal patterns and enhance generalization capabilities. Additionally, we incorporate self-supervised learning techniques to enable the model to learn diverse ECG patterns through specific tasks. Model performance is evaluated using various metrics, including heart rate calculation and root mean squared error (RMSE), on two different datasets. The comprehensive performance analysis demonstrates that our model generates accurate ECG signals, reducing the heart rate calculation error by 83.9% and 72.4% on the MIMIC III and Who is Alyx? datasets, respectively, suggesting its potential application in the healthcare domain to enhance heart rate prediction and overall cardiac monitoring. As an empirical proof of concept, we also present an Atrial Fibrillation (AF) detection task, showcasing the practical utility of the generated ECG signals for cardiac diagnostic applications. To encourage replicability and reuse in future ECG generation studies, we have shared the dataset and will also make the code publicly available.
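The abstract reports model quality through RMSE and heart-rate calculation error between generated and reference ECG. As a minimal illustration of how such metrics can be computed (not the authors' evaluation code), the following Python sketch uses NumPy and SciPy; the sampling rate, peak-detection threshold, and all function names are assumptions.

    # Illustrative evaluation sketch (not the paper's implementation):
    # compares a generated ECG segment against a reference ECG segment
    # using RMSE and a simple R-peak-based heart rate error.
    import numpy as np
    from scipy.signal import find_peaks

    FS = 125  # assumed sampling rate in Hz

    def rmse(generated: np.ndarray, reference: np.ndarray) -> float:
        """Root mean squared error between two equally long 1-D signals."""
        return float(np.sqrt(np.mean((generated - reference) ** 2)))

    def heart_rate_bpm(ecg: np.ndarray, fs: int = FS) -> float:
        """Rough heart rate estimate from R-peak spacing (threshold is an assumption)."""
        peaks, _ = find_peaks(ecg, height=np.mean(ecg) + 1.5 * np.std(ecg),
                              distance=int(0.4 * fs))  # refractory period of roughly 400 ms
        if len(peaks) < 2:
            return float("nan")
        rr_intervals = np.diff(peaks) / fs          # seconds between beats
        return 60.0 / float(np.mean(rr_intervals))  # beats per minute

    def heart_rate_error(generated: np.ndarray, reference: np.ndarray) -> float:
        """Absolute heart rate difference (BPM) between generated and reference ECG."""
        return abs(heart_rate_bpm(generated) - heart_rate_bpm(reference))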
David Obremski, Paula Friedrich, Carolin Wienrich, Please Let Me Think: The Influence of Conversational Fillers on Transparency and Perception of Waiting Time when Interacting with a Conversational AI in Virtual Reality, In Proceedings of the 27th International Conference on Multimodal Interaction, pp. 496--505. 2025.
[Download] [BibSonomy]
@inproceedings{obremski2025please, author = {David Obremski and Paula Friedrich and Carolin Wienrich}, url = {http://dblp.uni-trier.de/db/conf/icmi/icmi2025.html#ObremskiFW25}, year = {2025}, booktitle = {Proceedings of the 27th International Conference on Multimodal Interaction}, pages = {496--505}, title = {Please Let Me Think: The Influence of Conversational Fillers on Transparency and Perception of Waiting Time when Interacting with a Conversational AI in Virtual Reality} }
Florian Kern, Using Controller Styluses for Virtual Keyboards and Handwriting Text Input in XR. Universität Würzburg, 2025.
[Download] [BibSonomy] [Doi]
@phdthesis{https://doi.org/10.25972/opus-42563, author = {Florian Kern}, url = {https://opus.bibliothek.uni-wuerzburg.de/42563}, year = {2025}, publisher = {Universität Würzburg}, doi = {10.25972/OPUS-42563}, title = {Using Controller Styluses for Virtual Keyboards and Handwriting Text Input in XR} }
Abstract: This dissertation investigates the feasibility and applicability of repurposing consumer-grade XR controllers as controller styluses and evaluates their impact on the performance and user experience of virtual tap and swipe keyboards and handwriting text input in XR environments. Text input is a core feature of many XR applications, enabling tasks such as documenting, note-taking, chatting, and web browsing. However, XR, encompassing VR, AR, and MR, presents distinct challenges that limit traditional text input methods like physical keyboards or handwriting with pen and paper. As an alternative, prior research explored virtual keyboards and handwriting text input in VR and OST AR, utilizing XR controllers held in the conventional power grip or hand tracking. Yet, fundamental research gaps remained: the feasibility and applicability of repurposing consumer-grade XR controllers as controller styluses by holding them in a pen-like posture, such as the precision grip; integrating diverse XR devices and input modalities; comparing the performance and user experience of text input methods in VR and VST AR; and understanding the impact of mid-air and physically aligned virtual surfaces. To address these gaps, this dissertation introduces the OTSS, a modular and extensible framework for repurposing consumer-grade XR controllers as controller styluses equipped with self-made or 3D-printed stylus accessories. OTSS also incorporates virtual-to-physical alignment and refinement techniques to align virtual surfaces with physical counterparts or to place them freely in mid-air. Additionally, this dissertation presents the RSIO framework, an intermediate layer designed to simplify and unify cross-device and cross-platform XR application development. A series of user studies and technical evaluations demonstrate the applicability and versatility of the OTSS and RSIO frameworks. Building on these frameworks, two user studies involving a total of 136 participants provide detailed insights into the performance and user experience of virtual tap and swipe keyboards and handwriting text input in VR and VST AR. The findings underscore the potential of controller styluses for precise touch-based interaction on mid-air and physically aligned virtual surfaces, particularly when equipped with pressure-sensitive stylus tips for physical contact detection. Moreover, the results indicate that visual incongruencies are a distinct challenge in VST AR and suggest that while physical surfaces are desirable for text input in XR, they are not indispensable in mobile XR scenarios. Publicly available reference implementations are provided to establish a foundation for future research and the development of XR text input methods for professional, educational, and personal environments.
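The abstract mentions virtual-to-physical alignment techniques for mapping virtual surfaces onto physical counterparts without detailing them here. As a hedged illustration of one standard way such an alignment step can be computed (not the OTSS implementation), the following Python sketch fits a rigid transform with the Kabsch algorithm from a few points sampled on a physical surface to the matching corners of a virtual surface; the point values, counts, and names are hypothetical.

    # Illustrative sketch of rigid point-set alignment (Kabsch algorithm),
    # one common way a virtual surface could be aligned to a physical one.
    # This is not the OTSS implementation; the sample points are hypothetical.
    import numpy as np

    def fit_rigid_transform(physical_pts: np.ndarray, virtual_pts: np.ndarray):
        """Return rotation R and translation t mapping virtual points onto
        physical points in a least-squares sense (no scaling)."""
        p_centroid = physical_pts.mean(axis=0)
        v_centroid = virtual_pts.mean(axis=0)
        H = (virtual_pts - v_centroid).T @ (physical_pts - p_centroid)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = p_centroid - R @ v_centroid
        return R, t

    # Hypothetical usage: three corners tapped on a desk with the stylus tip,
    # matched against the corresponding corners of a virtual keyboard surface.
    physical = np.array([[0.10, 0.75, 0.30], [0.55, 0.75, 0.30], [0.10, 0.75, 0.62]])
    virtual = np.array([[0.00, 0.00, 0.00], [0.45, 0.00, 0.00], [0.00, 0.00, 0.32]])
    R, t = fit_rigid_transform(physical, virtual)
    aligned = virtual @ R.T + t          # virtual corners expressed in room coordinates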
Chris Zimmerer, Multimodal Interaction in Virtual and Extended Reality. 2025.
[BibSonomy] [Doi]
@phdthesis{Zimmerer2025, author = {Chris Zimmerer}, year = {2025}, doi = {10.25972/OPUS-42565}, title = {Multimodal Interaction in Virtual and Extended Reality} }
Lukas Schach, Christian Rack, Ryan P. McMahan, Marc Erich Latoschik, Motion-Based User Identification across XR and Metaverse Applications by Deep Classification and Similarity Learning. 2025.
[Download] [BibSonomy]
@misc{schach2025motionbaseduseridentificationxr, author = {Lukas Schach and Christian Rack and Ryan P. McMahan and Marc Erich Latoschik}, url = {https://arxiv.org/abs/2509.08539}, year = {2025}, title = {Motion-Based User Identification across XR and Metaverse Applications by Deep Classification and Similarity Learning} }
Christian Merz, Lukas Schach, Marie Luisa Fiedler, Jean-Luc Lugrin, Carolin Wienrich, Marc Erich Latoschik, Unobtrusive In-Situ Measurement of Behavior Change by Deep Metric Similarity Learning of Motion Patterns. 2025.
[Download] [BibSonomy]
@misc{merz2025unobtrusiveinsitumeasurementbehavior, author = {Christian Merz and Lukas Schach and Marie Luisa Fiedler and Jean-Luc Lugrin and Carolin Wienrich and Marc Erich Latoschik}, url = {https://arxiv.org/abs/2509.04174}, year = {2025}, title = {Unobtrusive In-Situ Measurement of Behavior Change by Deep Metric Similarity Learning of Motion Patterns} }
Samantha Monty, Dennis Alexander Mevißen, Marc Erich Latoschik, Improving Mid-Air Sketching in Room-Scale Virtual Reality with Dynamic Color-to-Depth and Opacity Cues, In IEEE Transactions on Visualization and Computer Graphics. 2025. To be published.
[BibSonomy]
@article{monty2025improving, author = {Samantha Monty and Dennis Alexander Mevißen and Marc Erich Latoschik}, journal = {IEEE Transactions on Visualization and Computer Graphics}, year = {2025}, title = {Improving Mid-Air Sketching in Room-Scale Virtual Reality with Dynamic Color-to-Depth and Opacity Cues} }
Abstract: Immersive 3D mid-air sketching systems liberate users from the confines of traditional 2D sketching canvases. However, perceptual challenges in Virtual Reality (VR), combined with the ergonomic and cognitive challenges of sketching in all three dimensions in mid-air, lower the accuracy and aesthetic quality of 3D sketches. This paper explores how color-to-depth and opacity cues support users in creating and perceiving freehand 3D strokes in room-scale sketching, unlocking a full 360° of freedom for creation. We implemented three graphic depth shader cues that modify the (1) alpha, (2) hue, and (3) value levels of a single color to dynamically adjust the color and transparency of meshes relative to their depth from the user. We investigated how these depth cues influence sketch efficiency, sketch quality, and the overall sketch experience with 24 participants in a comparative, counterbalanced, 4 x 1 within-subjects user study. First, with our graphic depth shader cues we successfully transferred results of prior research on seated sketching tasks to room-scale scenarios: our color-to-depth cues improved the similarity of sketches to target models, highlighting the usefulness of the color-to-depth approach even for the increased range of motion and depth in room-scale sketching. Second, our shaders helped participants complete tasks faster and spend a greater percentage of task time sketching, reduced the feeling of mental tiredness, and improved the feeling of sketch efficiency in room-scale sketching. We discuss these findings and share our insights and conclusions to advance research on improving spatial cognition in immersive sketching systems.
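The abstract describes shader cues that vary alpha, hue, or value as a function of a stroke's depth relative to the user. As a hedged illustration (not the study's actual shaders), the following Python sketch maps a depth value to one of the three cues by linear interpolation; the near/far range, base color, and interpolation factors are assumptions.

    # Illustrative sketch of depth-dependent color/opacity cues, roughly in the
    # spirit of the alpha, hue, and value cues described above. The ranges,
    # base color, and scaling factors are assumptions, not the study's shaders.
    import colorsys

    NEAR, FAR = 0.3, 3.0          # assumed depth range in meters from the user

    def normalized_depth(depth_m: float) -> float:
        """Clamp depth to [NEAR, FAR] and map it to [0, 1]."""
        return min(max((depth_m - NEAR) / (FAR - NEAR), 0.0), 1.0)

    def depth_cue_rgba(depth_m: float, cue: str = "alpha",
                       base_hsv=(0.6, 0.8, 0.9)) -> tuple:
        """Return (r, g, b, a) for a stroke point, varying one channel with depth."""
        d = normalized_depth(depth_m)
        h, s, v = base_hsv
        a = 1.0
        if cue == "alpha":           # farther strokes become more transparent
            a = 1.0 - 0.7 * d
        elif cue == "hue":           # hue shifts with increasing depth
            h = (h + 0.25 * d) % 1.0
        elif cue == "value":         # farther strokes become darker
            v = v * (1.0 - 0.6 * d)
        r, g, b = colorsys.hsv_to_rgb(h, s, v)
        return (r, g, b, a)

    # Example: a stroke point 1.5 m away rendered with the value cue.
    print(depth_cue_rgba(1.5, cue="value"))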
See all publications here