

Prof. Latoschik gave several interviews about the new large language model DeepSeek R1!

This year's winter expo takes place on the 7th of February 2025. Feel free to join us and explore a wide range of interesting projects.

alpha Uni followed two of our Games Engineering students for a few days for a short documentary!

There is now another PhD in the ranks of the PIIS and HCI Group.
Recent Publications
Self-Similarity Beats Agency in Augmented Reality Body Weight Perception, In IEEE Transactions on Visualization and Computer Graphics (TVCG), IEEE VR 25 special issue.
2025. To be published
[BibSonomy]
@article{fiedler2025selfsimilarity,
author = {Marie Luisa Fiedler and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG), IEEE VR 25 special issue},
year = {2025},
title = {Self-Similarity Beats Agency in Augmented Reality Body Weight Perception}
}
Abstract:
This paper investigates if and how self-similarity and having agency impact sense of embodiment, self-identification, and body weight estimation in Augmented Reality (AR). We conducted a 2x2 mixed design experiment involving 60 participants who interacted with either synchronously moving virtual humans or independently moving ones, each with self-similar or generic appearances, across two consecutive AR sessions. Participants evaluated their sense of embodiment, self-identification, and body weight perception of the virtual human. Our results show that self-similarity significantly enhanced sense of embodiment, self-identification, and the accuracy of body weight estimates with the virtual human. However, the effects of having agency over virtual human movements were notably weaker in these measures than in similar VR studies. Further analysis indicated that not only the virtual human itself but also the participants' body weight, self-esteem, and body shape concerns predict body weight estimates across all conditions. Our work advances the understanding of virtual human body weight perception in AR systems, emphasizing the importance of factors such as coherence with the real-world environment.
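As a rough illustration of how such condition effects and participant-level predictors might be analyzed, the following sketch fits a linear mixed model with statsmodels. It is not the authors' analysis code; the data file and column names are invented for this example.

# Hypothetical sketch: linear mixed model of body weight estimates with
# appearance, agency, and participant covariates; random intercept per participant.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bodyweight_estimates.csv")  # hypothetical long-format data, one row per trial
model = smf.mixedlm(
    "weight_estimate ~ appearance * agency + own_weight + self_esteem + shape_concern",
    data=df,
    groups=df["participant"],
)
result = model.fit()
print(result.summary())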
Interpupillary to Inter-Camera Distance of Video See-Through AR and its Impact on Depth Perception, In Proceedings of the 32nd IEEE Virtual Reality conference (VR '25).
2025. To be published
[BibSonomy]
@inproceedings{westermeier2025interpupillary,
author = {Franziska Westermeier and Chandni Murmu and Kristopher Kohm and Christopher Pagano and Carolin Wienrich and Sabarish V. Babu and Marc Erich Latoschik},
year = {2025},
booktitle = {Proceedings of the 32nd IEEE Virtual Reality conference (VR '25)},
title = {Interpupillary to Inter-Camera Distance of Video See-Through AR and its Impact on Depth Perception}
}
Abstract:
Interpupillary distance (IPD) is a crucial characteristic of head-mounted displays (HMDs) because it defines an important property for generating a stereoscopic parallax, which is essential for correct depth perception. This is why contemporary HMDs offer adjustable lenses to adapt to users' individual IPDs.
However, today's Video See-Through Augmented Reality (VST AR) HMDs use fixed camera placements to reconstruct the stereoscopic view of a user's environment.
This leads to a potential mismatch between individual IPD settings and the fixed Inter-Camera Distances (ICD), which in turn can lead to perceptual incongruencies, limiting the usability and potentially the applicability of VST AR in depth-sensitive use cases. To investigate this incongruency between IPD and ICD, we conducted a 2x3 mixed-factor design empirical evaluation using a near-field, open-loop reaching task comparing distance judgments of Virtual Reality (VR) and VST AR. We also explored improvements in reaching performance via perceptual calibration by incorporating a feedback phase between pre- and post-phase conditions, with a particular focus on the influence of IPD-ICD differences. Our Linear Mixed Model (LMM) analysis showed a significant difference between VR and VST AR, a significant effect of IPD-ICD mismatch, as well as a combined effect of both factors. This novel insight and its consequences are discussed specifically for depth perception tasks in AR, eXtended Reality (XR), and potential use cases.
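The perceptual consequence of an IPD-ICD mismatch can be sketched with a textbook first-order disparity model (a simplification for illustration, not taken from the paper): for a point at distance $Z$, cameras separated by the ICD capture an angular disparity of roughly $\delta \approx \mathrm{ICD}/Z$; if that disparity is presented unchanged to eyes separated by the IPD, it specifies a distance $\hat{Z}$ with $\mathrm{IPD}/\hat{Z} \approx \delta$, hence

\[
\hat{Z} \;\approx\; \frac{\mathrm{IPD}}{\mathrm{ICD}}\, Z .
\]

Under these assumptions (small angles, parallel optics, 1:1 disparity mapping), an ICD larger than the user's IPD compresses apparent depth, which is one intuition for why the mismatch matters in near-field reaching tasks.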
My Co-worker ChatGPT: Development of an XR Application for Embodied Artificial Intelligence in Work Environments, In 2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW).
IEEE Computer Science,
2025. To be published.
[BibSonomy]
@inproceedings{krop2025coworker,
author = {Philipp Krop and David Obremski and Astrid Carolus and Marc Erich Latoschik and Carolin Wienrich},
year = {2025},
booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)},
publisher = {IEEE Computer Science},
title = {My Co-worker ChatGPT: Development of an XR Application for Embodied Artificial Intelligence in Work Environments}
}
Abstract:
With recent developments in spatial computing, work contexts might shift to augmented reality. Embodied AI - virtual conversational agents backed by AI systems - have the potential to enhance these contexts and open up more communication channels than just text. To support knowledge transfer from virtual agent research to the general populace, we developed My CoWorker ChatGPT - an interactive demo where employees can try out various embodied AIs in a virtual office or their own using augmented reality. We use state-of-the-art speech synthesis and body-scanning technology to create believable and trustworthy AI assistants. The demo was shown at multiple events throughout Germany, where it was well received and sparked fruitful conversations about the possibilities of embodied AI in work contexts.
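As a loose illustration of the general pattern behind such an embodied AI assistant (not the authors' demo code), the sketch below wires a conversation history to a large language model using the OpenAI Python SDK; transcribe_microphone() and synthesize_speech() are hypothetical placeholders for the speech recognition and speech synthesis stages.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system",
            "content": "You are a helpful virtual co-worker in an office setting."}]

def agent_reply(user_text: str) -> str:
    """Append the user's utterance, query the model, and return its reply."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# while True:
#     text = transcribe_microphone()        # hypothetical speech-to-text stage
#     synthesize_speech(agent_reply(text))  # hypothetical text-to-speech stage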
When Fear Overshadows Perceived Plausibility: The Influence of Incongruencies on Acrophobia in VR, In Proceedings of the 32nd IEEE Virtual Reality conference (VR '25).
IEEE Computer Science,
2025. Accepted for publication and presentation at the 2025 IEEE VR.
[Download] [BibSonomy]
@inproceedings{brubach2025overshadows,
author = {Larissa Brübach and Deniz Celikhan and Lennard Rüffert and Franziska Westermeier and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-bruebach-height-and-plausibility-preprint.pdf},
year = {2025},
booktitle = {Proceedings of the 32nd IEEE Virtual Reality conference (VR '25)},
publisher = {IEEE Computer Science},
title = {When Fear Overshadows Perceived Plausibility: The Influence of Incongruencies on Acrophobia in VR}
}
Abstract:
Virtual Reality Exposure Therapy (VRET) has become an effective, customizable, and affordable treatment for various psychological and physiological disorders. In particular, it has been used for decades to treat specific anxiety disorders, such as acrophobia or arachnophobia. However, to ensure a positive outcome for patients, we must understand and control the effects potentially caused by the technology and medium of Virtual Reality (VR) itself. This article specifically investigates the impact of the Plausibility illusion (Psi), as one of the two theorized presence components, on the fear of heights. In two experiments, 30 participants each experienced two different heights with congruent and incongruent object behaviors in a 2 x 2 within-subject design. Results show that the strength of the congruence manipulation plays a significant role. Only when incongruencies are strong enough will they be recognized by users, specifically in high fear conditions, as triggered by exposure to increased heights. If incongruencies are too subtle, they seem to be overshadowed by the stronger fear reactions. Our evidence contributes to recent theories of VR effects and emphasizes the importance of understanding and controlling factors potentially assumed to be incidental, specifically during VRET designs. Incongruencies should be controlled so that they do not have an unwanted influence on the patient's fear response.
Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness, In IEEE Transactions on Visualization and Computer Graphics (TVCG).
2025. Accepted for presentation at IEEE VR 2025 and for publication in IEEE TVCG special issue
[BibSonomy]
@article{kullmann2025coverage,
author = {Peter Kullmann and Theresa Schell and Timo Menzel and Mario Botsch and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
year = {2025},
title = {Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness}
}
Abstract:
Facial expressions are crucial for many eXtended Reality (XR) use cases, from mirrored self exposures to social XR, where users interact via their avatars as digital alter egos. However, current XR devices differ in sensor coverage of the face region. Hence, a faithful reconstruction of facial expressions either has to exclude these areas or synthesize missing animation data with model-based approaches, potentially leading to perceivable mismatches between executed and perceived expression. This paper investigates potential effects of the coverage of facial animations (none, partial, or whole) on important factors of self-perception. We exposed 83 participants to their mirrored personalized avatar. They were shown their mirrored avatar face with upper and lower face animation, upper face animation only, lower face animation only, or no face animation. Whole animations were rated higher in virtual embodiment and slightly lower in uncanniness. Missing animations did not differ from partial ones in terms of virtual embodiment. Contrasts showed significantly lower humanness, lower eeriness, and lower attractiveness for the partial conditions. For questions related to self-identification, effects were mixed. We discuss participants' shift in body part attention across conditions. Qualitative results show participants perceived their virtual representation as fascinating yet uncanny.
NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback, In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 19.
2025. Conditionally accepted for publication
[Download] [BibSonomy]
@article{hinterreiter2025newsunfold,
author = {Smi Hinterreiter and Martin Wessel and Fabian Schliski and Isao Echizen and Marc Erich Latoschik and Timo Spinde},
journal = {Proceedings of the International AAAI Conference on Web and Social Media},
url = {https://arxiv.org/abs/2407.17045},
year = {2025},
volume = {19},
title = {NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback}
}
Abstract:
Media bias is a multifaceted problem, leading to one-sided views and impacting decision-making. A way to address digital media bias is to detect and indicate it automatically through machine-learning methods. However, such detection is limited due to the difficulty of obtaining reliable training data. Human-in-the-loop-based feedback mechanisms have proven an effective way to facilitate the data-gathering process. Therefore, we introduce and test feedback mechanisms for the media bias domain, which we then implement on NewsUnfold, a news-reading web application to collect reader feedback on machine-generated bias highlights within online news articles. Our approach augments dataset quality by significantly increasing inter-annotator agreement by 26.31% and improving classifier performance by 2.49%. As the first human-in-the-loop application for media bias, the feedback mechanism shows that a user-centric approach to media bias data collection can return reliable data while being scalable and evaluated as easy to use. NewsUnfold demonstrates that feedback mechanisms are a promising strategy to reduce data collection expenses and continuously update datasets to changes in context.
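For readers curious how inter-annotator agreement of the kind reported above can be quantified, here is a minimal, purely illustrative sketch using Cohen's kappa from scikit-learn; the abstract does not state which agreement metric the paper uses, and the labels below are made up.

from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = sentence labeled as biased
annotator_b = [1, 0, 1, 0, 0, 0, 1, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # chance-corrected agreement in [-1, 1]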
Binded to the Lights – Storytelling with a Physically Embodied and a Virtual Robot using Emotionally Adapted Lights, In 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN), pp. 2117-2124.
2024.
[BibSonomy] [Doi]
@inproceedings{10731419,
author = {Sophia C. Steinhaeusser and Elisabeth Ganal and Murat Yalcin and Marc Erich Latoschik and Birgit Lugrin},
year = {2024},
booktitle = {2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN)},
pages = {2117-2124},
doi = {10.1109/RO-MAN60168.2024.10731419},
title = {Binded to the Lights – Storytelling with a Physically Embodied and a Virtual Robot using Emotionally Adapted Lights}
}
Abstract:
Virtual environments (VEs) can be designed to evoke specific emotions, for example by using colored light, not only applicable for games but also for virtual storytelling with a single storyteller. Social robots are perfectly suited as storytellers due to their multimodality. However, there is no research yet on the transferability of robotic storytelling to virtual reality (VR). In addition, the transfer of concepts from VE design, such as adaptive room illumination, to robotic storytelling has not yet been tested. Thus, we conducted a study comparing the same robotic storytelling with a physically embodied robotic storyteller and in VR to investigate the transferability of robotic storytelling to VR. As a second factor, we manipulated the room light following design guidelines for VEs or kept it constant. Results show that a virtual robotic storyteller is not perceived worse than a physically embodied storyteller, suggesting the applicability of virtual static robotic storytellers. Regarding emotion-driven lighting, no significant effect of colored lights on self-reported emotions was found, but adding colored light increased the social presence of the robot and its perceived competence in both VR and reality. As our study was limited to a static robotic storyteller not using bodily expressiveness, future work is needed to investigate the interaction between well-researched robot modalities and the rather new modality of colored light based on our results.
Anti-aliasing Techniques in Virtual Reality: A User Study with Perceptual Pairwise Comparison Ranking Scheme, In GI VR/AR Workshop, pp. 10--18420.
2024.
[BibSonomy]
@inproceedings{waldow2024anti,
author = {Kristoffer Waldow and Jonas Scholz and Martin Misiak and Arnulph Fuhrmann and Daniel Roth and Marc Erich Latoschik},
year = {2024},
booktitle = {GI VR/AR Workshop},
pages = {10--18420},
title = {Anti-aliasing Techniques in Virtual Reality: A User Study with Perceptual Pairwise Comparison Ranking Scheme}
}
Abstract: