Human-Computer Interaction
Winter EXPO 2026 Recap
The Winter EXPO 2026 was a great success, with a wide range of demos and projects!
Winter EXPO 2026 Invitation!
We invite you to this year's Winter EXPO on the 6th of February!
Dr. Franziska Westermeier: Successful Dissertation
Effects of Incongruencies Across the Reality-Virtuality Continuum
Dr. Martin Mišiak: Successful Dissertation
Realistic VR Rendering: Approximations, Optimizations and their Impact on Perception
Dr. Christian Rack: Successful Dissertation
Show Me How You Move and I Tell You Who You Are - Motion-Based User Identification and Verification for the Metaverse

Recent Publications

Murat Yalcin, Marc Erich Latoschik, End-to-End Non-Invasive ECG Signal Generation from PPG Signal: A Self-Supervised Learning Approach, In Frontiers in Physiology. 2026. To be published.
[Download] [BibSonomy]
@article{yalcin2026endtoend,
  author  = {Murat Yalcin and Marc Erich Latoschik},
  journal = {Frontiers in Physiology},
  url     = {https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2026.1694995/abstract},
  year    = {2026},
  title   = {End-to-End Non-Invasive ECG Signal Generation from PPG Signal: A Self-Supervised Learning Approach}
}
Abstract: Electrocardiogram (ECG) signals are frequently utilized for detecting important cardiac events, such as variations in ECG intervals, as well as for monitoring essential physiological metrics, including heart rate (HR) and heart rate variability (HRV). However, the accurate measurement of ECG traditionally requires a clinical environment, thereby limiting its feasibility for continuous, everyday monitoring. In contrast, Photoplethysmography (PPG) offers a non-invasive, cost-effective optical method for capturing cardiac data in daily settings and is increasingly utilized in various clinical and commercial wearable devices. However, PPG measurements are significantly less detailed than those of ECG. In this study, we propose a novel approach to synthesize ECG signals from PPG signals, facilitating the generation of robust ECG waveforms using a simple, unobtrusive wearable setup. Our approach utilizes a Transformer-based Generative Adversarial Network model, designed to accurately capture ECG signal patterns and enhance generalization capabilities. Additionally, we incorporate self-supervised learning techniques to enable the model to learn diverse ECG patterns through specific tasks. Model performance is evaluated using various metrics, including heart rate calculation and root mean squared error (RMSE), on two different datasets. The comprehensive performance analysis demonstrates that our model exhibits superior efficacy in generating accurate ECG signals (reducing the heart rate calculation error by 83.9% and 72.4% on the MIMIC III and Who is Alyx? datasets, respectively), suggesting its potential application in the healthcare domain to enhance heart rate prediction and overall cardiac monitoring. As an empirical proof of concept, we also present an Atrial Fibrillation (AF) detection task, showcasing the practical utility of the generated ECG signals for cardiac diagnostic applications. To encourage replicability and reuse in future ECG generation studies, we have shared the dataset and will also make the code publicly available.
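To make the evaluation metrics above concrete, here is a minimal sketch of how a heart rate error and RMSE comparison between a generated and a reference ECG might be computed; the sampling rate, peak-detection settings, and synthetic signals are illustrative assumptions, not the paper's published code.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 125  # assumed sampling rate in Hz; the paper's datasets may differ

def heart_rate_bpm(ecg: np.ndarray, fs: int = FS) -> float:
    """Estimate heart rate from detected R-peaks of an ECG segment."""
    # Peak-detection settings are illustrative; real pipelines tune them.
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.mean(ecg) + np.std(ecg))
    if len(peaks) < 2:
        return float("nan")
    rr = np.diff(peaks) / fs          # seconds between successive beats
    return 60.0 / float(np.mean(rr))  # beats per minute

def rmse(generated: np.ndarray, reference: np.ndarray) -> float:
    """Root mean squared error between generated and reference ECG."""
    return float(np.sqrt(np.mean((generated - reference) ** 2)))

# Usage with a crude synthetic stand-in for an ECG pair (~72 bpm):
t = np.arange(0, 10, 1 / FS)
reference = np.sin(2 * np.pi * 1.2 * t) ** 21
generated = reference + 0.05 * np.random.randn(len(t))
hr_err = abs(heart_rate_bpm(generated) - heart_rate_bpm(reference))
print(f"HR error: {hr_err:.2f} bpm, RMSE: {rmse(generated, reference):.4f}")
```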
Franziska Westermeier, Effects of Incongruencies Across the Reality-Virtuality Continuum. 2026.
[Download] [BibSonomy] [Doi]
@phdthesis{westermeier2026effects,
  author = {Franziska Westermeier},
  url    = {https://doi.org/10.25972/OPUS-43370},
  year   = {2026},
  doi    = {10.25972/OPUS-43370},
  title  = {Effects of Incongruencies Across the Reality-Virtuality Continuum}
}
Abstract: This dissertation examines the perceptual and cognitive effects of incongruencies in eXtended Reality (XR) experiences along the Reality-Virtuality (RV) continuum, with a particular focus on Virtual Reality (VR) and Video See-Through (VST) Augmented Reality (AR). VST AR integrates video images from front-facing cameras on the Head-Mounted Display (HMD) with virtual content. XR HMDs that are capable of VST AR often also include a VR mode. While VR has been extensively studied, VST AR remains underexplored despite rapid advances in camera resolution and rendering techniques. The blending of virtual and real-world elements in VST AR frequently gives rise to perceptual mismatches, such as conflicting depth cues, misaligned virtual objects, and latency discrepancies, which challenge established XR frameworks and may adversely affect user experience. This dissertation, incorporating five key publications and five empirical experiments, investigates the effects of incongruencies in VR and VST AR by examining both subjective reports and objective behavioral measures. While users may not always consciously detect these mismatches, the empirical findings of this dissertation reveal their significant impact on depth perception, spatial judgments, and performance. A central focus of this work is the application and refinement of the Congruence and Plausibility (CaP) model, which describes how incongruencies operate at different processing levels, from low-level sensory distortions to higher-order cognitive inconsistencies. The results indicate that AR-inherent perceptual incongruencies influence the experience at a subconscious level, challenging existing theoretical frameworks that have primarily focused on visually coherent VR experiences. To further support this understanding, the dissertation introduces a methodological framework for analyzing and predicting the effects of incongruencies, contributing to the development of coherent and immersive XR applications. The conducted research affirms both the complexity and promise of VST AR technologies. By revealing how subconscious factors interact with users’ conscious perceptions, this dissertation enriches theoretical understanding and provides strategies for advancing XR research.
Marie Luisa Fiedler, Christian Merz, Jonathan Tschanter, Carolin Wienrich, Marc Erich Latoschik, Technological Advances in Two Generations of Consumer-Grade VR Systems: Effects on User Experience and Task Performance. 2026.
[Download] [BibSonomy]
@misc{fiedler2026technologicaladvances,
  author = {Marie Luisa Fiedler and Christian Merz and Jonathan Tschanter and Carolin Wienrich and Marc Erich Latoschik},
  url    = {https://arxiv.org/abs/2601.09610},
  year   = {2026},
  title  = {Technological Advances in Two Generations of Consumer-Grade VR Systems: Effects on User Experience and Task Performance}
}
Marie Luisa Fiedler, Christian Merz, Lukas Schach, Jonathan Tschanter, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik, Am I Still Me? Visual Congruence Across Reality–Virtuality and Avatar Appearance in Shaping Self-Perception and Behavior, In IEEE Transactions on Visualization and Computer Graphics. 2026. To be published.
[BibSonomy]
@article{fiedler2026still,
  author  = {Marie Luisa Fiedler and Christian Merz and Lukas Schach and Jonathan Tschanter and Mario Botsch and Carolin Wienrich and Marc Erich Latoschik},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  year    = {2026},
  title   = {Am I Still Me? Visual Congruence Across Reality–Virtuality and Avatar Appearance in Shaping Self-Perception and Behavior}
}
Abstract: This paper presents the first systematic investigation of how congruence in visual self-representation influences self-perception and behavior. We span a continuum from the physical self through avatars with graded self-similarity to clearly dissimilar avatars in virtual reality (VR). In a 1×4 within-user study, participants completed movement and quiz tasks in either physical reality or a digital twin environment in VR, where they embodied one of three avatars: a photorealistic self-similar avatar, a dissimilar same-gender avatar, or a dissimilar opposite-gender avatar. Subjective measures included presence, sense of embodiment, self-identification, and perceived change, and were complemented by an objective movement metric of behavioral change. Compared to physical reality, VR, even with a self-similar avatar, produced lower presence, a weaker sense of embodiment, and reduced self-identification, revealing a persistent gap in visual congruence. Within VR, self-similar avatars enhanced body ownership, self-location, and self-identification relative to dissimilar avatars. Conversely, dissimilar avatars produced measurable behavioral changes compared with self-similar ones. Gender cues, however, had little impact in gender-neutral tasks. Overall, the findings show that photorealistic self-similar avatars reinforce embodiment and self-identification. However, VR still falls short of achieving congruence with physical reality, underscoring key challenges for avatar realism and ecological validity.
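The objective movement metric is described above only at a high level; as a hedged illustration, a behavioral-change measure of this kind can be built from per-participant movement statistics compared across conditions. The joint choice, sampling rate, and aggregation below are our assumptions, not the paper's actual metric.

```python
import numpy as np

def mean_speed(positions: np.ndarray, dt: float) -> float:
    """Mean speed of one tracked point (e.g., a hand) from an (N, 3) trajectory."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # meters per frame
    return float(np.mean(steps) / dt)

def behavioral_change(cond_a: list, cond_b: list, dt: float = 1 / 90) -> float:
    """Difference in mean movement speed between two conditions, averaged
    over participants (illustrative; not the paper's published metric)."""
    a = np.mean([mean_speed(p, dt) for p in cond_a])
    b = np.mean([mean_speed(p, dt) for p in cond_b])
    return float(b - a)

# Usage with random-walk stand-ins for two participants per condition,
# assuming 90 Hz tracking:
rng = np.random.default_rng(0)
cond_a = [rng.normal(scale=0.01, size=(900, 3)).cumsum(axis=0) for _ in range(2)]
cond_b = [rng.normal(scale=0.02, size=(900, 3)).cumsum(axis=0) for _ in range(2)]
print(f"Change in mean speed: {behavioral_change(cond_a, cond_b):.3f} m/s")
```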
Jonathan Tschanter, Christian Merz, Marie Luisa Fiedler, Carolin Wienrich, Marc Erich Latoschik, Use Case Matters: Comparing the User Experience and Task Performance Across Tasks for Embodied Interaction in VR, In IEEE Transactions on Visualization and Computer Graphics. 2026. To be published.
[BibSonomy]
@article{tschanter2026matters,
  author  = {Jonathan Tschanter and Christian Merz and Marie Luisa Fiedler and Carolin Wienrich and Marc Erich Latoschik},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  year    = {2026},
  title   = {Use Case Matters: Comparing the User Experience and Task Performance Across Tasks for Embodied Interaction in VR}
}
Abstract: Integrated Virtual Reality (IVR) systems are central to avatar-mediated use cases in Virtual Reality (VR), reconstructing users' movements on avatars. They differ primarily in their tracking architectures, which determine how completely and accurately users' movements are captured and reconstructed on avatars. Many current IVR systems reduce user-worn hardware, trading reconstruction accuracy against cost and setup complexity, yet their impact on user experience and task performance across use cases remains underexplored. We compared three reduced user-worn IVR systems, each with a distinct technical approach: (1) Captury (markerless outside-in optical tracking), (2) Meta Movement SDK (markerless inside-out optical tracking), and (3) Vive Trackers (marker-based outside-in optical tracking with IMUs). In a 3×5 mixed design, participants performed five tasks, simulating different use cases, to probe distinct aspects of these systems. No system consistently outperformed the others. Meta excelled in hand-based, fast-paced interactions, while Captury and Vive performed better in lower-body tasks and during full-body pose observation. These findings underscore the need to evaluate reduced user-worn IVR systems within the specific use case. We offer practical guidance for system selection based on use-case demands and release our tasks as an open-source, extensible framework to support future evaluations when selecting IVR systems.
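Since the released task framework is meant to run unchanged on different tracking systems, a backend-agnostic interface is the natural design; the sketch below illustrates that idea in Python, with class and method names that are our invention rather than the released framework's API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Pose:
    """One tracked joint pose: position in meters, rotation as a quaternion."""
    position: tuple  # (x, y, z)
    rotation: tuple  # (x, y, z, w)

class TrackingBackend(ABC):
    """Common interface so evaluation tasks can run unchanged on any IVR
    system: markerless outside-in, markerless inside-out, or marker-based."""

    @abstractmethod
    def joint_pose(self, joint: str) -> Pose:
        ...

class MockBackend(TrackingBackend):
    """Stand-in backend; a real one would wrap Captury, the Meta Movement
    SDK, or Vive Trackers behind this same interface."""

    def joint_pose(self, joint: str) -> Pose:
        return Pose(position=(0.0, 1.6, 0.0), rotation=(0.0, 0.0, 0.0, 1.0))

def run_task(backend: TrackingBackend) -> None:
    """A task only talks to the interface, never to a concrete system."""
    head = backend.joint_pose("head")
    print(f"Head height: {head.position[1]:.2f} m")

run_task(MockBackend())
```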
Jonathan Tschanter, Christian Merz, Carolin Wienrich, Marc Erich Latoschik, How Harassment Shapes Self-Perception and Well-Being in Social VR: Evidence from a Controlled Lab Study, In IEEE Transactions on Visualization and Computer Graphics. 2026. To be published.
[BibSonomy]
@article{tschanter2026harassment,
  author  = {Jonathan Tschanter and Christian Merz and Carolin Wienrich and Marc Erich Latoschik},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  year    = {2026},
  title   = {How Harassment Shapes Self-Perception and Well-Being in Social VR: Evidence from a Controlled Lab Study}
}
Abstract: Social Virtual Reality (SVR) allows users to meet and build relationships through embodied avatars and real-time interaction in virtual spaces. While embodiment can strengthen social connections and presence, it can also intensify negative encounters, making SVR particularly vulnerable to harassment. Despite frequent reports of verbal, visual, and "physical" violations in SVR, little is known about how harassment reshapes users' self-perception, including their sense of embodiment, self-identification, closeness, and avatar customization preferences. We conducted a controlled experiment with 52 participants who experienced either a neutral or a harassment condition in a scenario modeled after real SVR incidents. Participants perceived the harassing peer as significantly more negative, annoying, and disturbing than the neutral peer. Contrary to prior reports, harassment did not significantly affect well-being measures, including emotional state, self-esteem, and physiological arousal, within this controlled scenario. However, participants reported stronger bodily change, attributed more of their own attitudes and emotions to their avatars, and increased interpersonal distance when personal space was invaded. Self-reported coping strategies included ignoring, stepping back, using humor, and retaliating. Notably, avatar customization preferences shifted across conditions. Participants in the neutral condition favored personalized avatars, whereas those in the harassment condition more frequently preferred anonymity in public spaces. Together, these findings demonstrate that harassment in SVR not only exploits embodiment but also reshapes self-perception. We further contribute methodological insights into how harassment can be ethically and reproducibly studied in controlled SVR-like experiments.
David Obremski, Paula Friedrich, Carolin Wienrich, To be Healed or Hacked? - User‑Centered Ethical Design for Embodied AI in Mental Health Care, In IEEE Transactions on Visualization and Computer Graphics. 2026. To be published.
[BibSonomy]
@article{obremski2026healed,
  author  = {David Obremski and Paula Friedrich and Carolin Wienrich},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  year    = {2026},
  title   = {To be Healed or Hacked? - User‑Centered Ethical Design for Embodied AI in Mental Health Care}
}
Abstract: The global prevalence of mental health disorders has created a substantial treatment gap. To support clinicians and increase access to care, researchers in the field of Artificial Intelligence (AI) and Virtual Reality (VR) have investigated technology-mediated psychotherapy for years. However, research about stakeholders' concerns and their readiness to use AI in psychotherapy remains scarce. This study takes a user-centered approach to accommodate patients' concerns and, based on the results, implements measures to foster self-disclosure and trust towards an embodied AI therapist in VR. First, we conducted an online study with mental health patients (N = 152), which identified data autonomy and transparency as their primary ethical concerns. In a subsequent in-person VR study (N = 90), we compared the effects of increased data autonomy and transparency on self-disclosure and trust towards an embodied AI therapist. Results indicated that higher data autonomy led to greater self-disclosure, while transparency had no significant effect. Manipulating data autonomy and transparency did not affect perceived trust, though exploratory analyses revealed that women reported significantly higher trust levels than men. These findings illuminate patients' priorities and provide implications for technical designs for AI-driven mental health care.
Florian Kern, Lukas Polifke, Paula Friedrich, Marc Erich Latoschik, Carolin Wienrich, David Obremski, CECA - A Configurable Framework for Embodied Conversational AI Agents in Extended Reality, In 2026 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). 2026. To be published.
[BibSonomy]
@inproceedings{kern2026configurable,
  author    = {Florian Kern and Lukas Polifke and Paula Friedrich and Marc Erich Latoschik and Carolin Wienrich and David Obremski},
  year      = {2026},
  booktitle = {2026 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
  title     = {CECA - A Configurable Framework for Embodied Conversational AI Agents in Extended Reality}
}
Abstract: We present CECA, a configurable framework for embodied conversational AI agents in Unity-based extended reality (XR) applications. CECA employs a client–server architecture to decouple agent logic from game engine–based embodiment. Built on LiveKit Agents, our approach integrates speech-to-text (STT), large language models (LLMs), and text-to-speech (TTS) into a unified, streaming voice-to-voice pipeline configured via metadata rather than code changes. We outline how this architecture flexibly integrates local and cloud AI providers while mitigating limited provider SDK support in Unity. Finally, we highlight opportunities for future work, including multi-agent scenarios, higher-level templates for XR research, and systematic user studies.
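As a rough illustration of the streaming voice-to-voice pipeline described above, here is a minimal LiveKit Agents worker in the style of its v1.x Python API; the provider plugins and instructions are placeholder choices, and CECA's metadata-driven configuration and Unity-side embodiment are not shown.

```python
# Minimal LiveKit Agents voice pipeline sketch (v1.x-style Python API).
# Providers are illustrative stand-ins; CECA selects them via metadata.
from livekit import agents
from livekit.agents import Agent, AgentSession
from livekit.plugins import deepgram, openai, silero

async def entrypoint(ctx: agents.JobContext) -> None:
    await ctx.connect()
    session = AgentSession(
        vad=silero.VAD.load(),                # voice activity detection
        stt=deepgram.STT(),                   # speech-to-text
        llm=openai.LLM(model="gpt-4o-mini"),  # dialogue model
        tts=openai.TTS(),                     # text-to-speech
    )
    # An XR client (e.g., a Unity app) joins the same room and drives the
    # agent's visual embodiment from the audio it receives.
    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are an embodied conversational agent."),
    )

if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
```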
See all publications here