Human-Computer Interaction
Programming Course 'Interface Development': Results of WS 2023/24
The winter semester has come to an end, and we are happy to present the results of the programming course 'Interface Development'.
Bavaria's new Minister of Digital Affairs at the XR Hub
Bavaria's new Minister of Digital Affairs Dr. Fabian Mehring visited the University of Würzburg for the first time. He was visibly impressed by the projects and achievements of the XR Hub.
Visit from the Universitätsrat and Kuratorium
The HCI Chair and PIIS working group had the pleasure of hosting the Universitätsrat and the Kuratorium at CAIDAS.
Studien-Info-Tag 2024
We welcomed pupils to the university-wide Studien-Info-Tag, where prospective students had the opportunity to join our lab tour entitled 'AI, Metaverse and User Experience: Understanding and Designing Digital Worlds in the Human-Computer Interaction degree program'.

Open Positions

Research Associate (m/f/d) sought for the AIL AT WORK project
We have an open position on the research staff for the AIL AT WORK project.


Recent Publications

Erik Wolf, Individual-, System-, and Application-Related Factors Influencing the Perception of Virtual Humans in Virtual Environments. 2024. Under Review
[BibSonomy]
@phdthesis{wolf2024thesis, author = {Erik Wolf}, year = {2024}, title = {Individual-, System-, and Application-Related Factors Influencing the Perception of Virtual Humans in Virtual Environments} }
Abstract: Mixed, augmented, and virtual reality, collectively known as extended reality (XR), allows users to immerse themselves in virtual environments and engage in experiences surpassing reality's boundaries. Virtual humans are ubiquitous in such virtual environments and can be utilized for myriad purposes, offering the potential to greatly impact daily life. Through the embodiment of virtual humans, XR offers the opportunity to influence how we see ourselves and others. In this function, virtual humans serve as a predefined stimulus whose perception is elementary for researchers, application designers, and developers to understand. This dissertation aims to investigate the influence of individual-, system-, and application-related factors on the perception of virtual humans in virtual environments, focusing on their potential use as stimuli in the domain of body perception. Individual-related factors encompass influences based on the user's characteristics, such as appearance, attitudes, and concerns. System-related factors relate to the technical properties of the system that implements the virtual environment, such as the level of immersion. Application-related factors refer to design choices and specific implementations of virtual humans within virtual environments, such as their rendering or animation style. This dissertation provides a contextual framework and reviews the relevant literature on factors influencing the perception of virtual humans. To address identified research gaps, it reports on five empirical studies analyzing quantitative and qualitative data from a total of 165 participants. The studies utilized a custom-developed XR system, enabling users to embody rapidly generated, photorealistically personalized virtual humans that can be realistically altered in body weight and observed using different immersive XR displays. The dissertation's findings showed, for example, that embodiment and personalization of virtual humans serve as self-related cues and moderate the perception of their body weight based on the user's body weight. They also revealed a display bias that significantly influences the perception of virtual humans, with disparities in body weight perception of up to nine percent between different immersive XR displays. Based on all findings, implications for application design were derived, including recommendations regarding reconstruction, animation, body weight modification, and body weight estimation methods for virtual humans, but also for the general user experience. By revealing influences on the perception of virtual humans, this dissertation contributes to understanding the intricate relationship between users and virtual humans. The findings and implications presented have the potential to enhance the design and development of virtual humans, leading to improved user experiences and broader applications beyond the domain of body perception.
Kristoffer Waldow, Lukas Decker, Martin Mišiak, Arnulph Fuhrmann, Daniel Roth, Marc Erich Latoschik, Investigating Incoherent Depth Perception Features in Virtual Reality using Stereoscopic Impostor-Based Rendering, In Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24). IEEE, 2024. Best poster award 🏆
[BibSonomy]
@inproceedings{waldow2024investigating, author = {Kristoffer Waldow and Lukas Decker and Martin Mišiak and Arnulph Fuhrmann and Daniel Roth and Marc Erich Latoschik}, year = {2024}, booktitle = {Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24)}, publisher = {IEEE}, title = {Investigating Incoherent Depth Perception Features in Virtual Reality using Stereoscopic Impostor-Based Rendering} }
Abstract: Depth perception is essential for our daily experiences, aiding in orientation and interaction with our surroundings. Virtual Reality allows us to decouple such depth cues, mainly represented through binocular disparity and motion parallax. With fully mesh-based rendering methods, these cues are not problematic, as they originate from the object's underlying geometry. However, manipulating motion parallax, as seen in stereoscopic impostor-based rendering, raises multiple perceptual questions. Therefore, we conducted a user experiment to investigate how varying object sizes affect such visual errors and perceived 3-dimensionality, revealing a significant negative correlation and suggesting new assumptions about visual quality.
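The perceptual issue this paper probes, motion parallax distortion under impostor rendering, can be illustrated with a few lines of geometry. The following is a minimal sketch, not code from the paper, and all scene values are hypothetical: under a small-angle approximation, a point at depth d shifts by roughly Δx/d radians in the image when the viewer's head translates laterally by Δx, whereas an impostor texel inherits the depth of its billboard plane, so its apparent shift is wrong whenever the plane depth differs from the point's true depth.

import math

def parallax_shift(head_dx_m: float, depth_m: float) -> float:
    # Small-angle approximation of the angular image shift (radians)
    # of a point at depth depth_m when the head moves laterally by head_dx_m.
    return head_dx_m / depth_m

# Hypothetical scene values, for illustration only.
head_dx = 0.05        # 5 cm lateral head movement
true_depth = 2.2      # true depth of a surface point (m)
impostor_depth = 2.0  # depth of the flat impostor billboard (m)

# The impostor shifts every texel as if it sat on the billboard plane,
# so the parallax error grows with the depth mismatch.
error = parallax_shift(head_dx, impostor_depth) - parallax_shift(head_dx, true_depth)
print(f"parallax error: {math.degrees(error) * 60:.1f} arcmin")

One plausible reading of the paper's object-size finding follows from this geometry: larger objects span a wider depth range around a single flat billboard, increasing the worst-case depth mismatch.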
Philipp Krop, Martin J. Koch, Astrid Carolus, Marc Erich Latoschik, Carolin Wienrich, The Effects of Expertise, Humanness, and Congruence on Perceived Trust, Warmth, Competence and Intention to Use Embodied AI, In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), p. 9. New York, NY, USA: ACM, 2024. To be published.
[Download] [BibSonomy] [Doi]
@inproceedings{krop2024effects, author = {Philipp Krop and Martin J. Koch and Astrid Carolus and Marc Erich Latoschik and Carolin Wienrich}, url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-chi-framing-trust-ai-preprint.pdf}, year = {2024}, booktitle = {Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24)}, publisher = {ACM}, address = {New York, NY, USA}, pages = {9}, title = {The Effects of Expertise, Humanness, and Congruence on Perceived Trust, Warmth, Competence and Intention to Use Embodied AI} }
Abstract: Even though people imagine different embodiments when asked which AI they would like to work with, most studies investigate trust in AI systems without specific physical appearances. This study aims to close this gap by combining influencing factors of trust to analyze their impact on the perceived trustworthiness, warmth, and competence of an embodied AI. We recruited 68 participants who observed three co-working scenes with an embodied AI, presented as expert/novice (expertise), human/AI (humanness), or congruent/slightly incongruent to the environment (congruence). Our results show that the expertise condition had the largest impact on trust, acceptance, and perceived warmth and competence. When controlled for perceived competence, the humanness of the AI and the congruence of its embodiment to the environment also influence acceptance. The results show that besides expertise and the perceived competence of the AI, other design variables are relevant for successful human-AI interaction, especially when the AI is embodied.
David Mal, Nina Döllinger, Erik Wolf, Stephan Wenninger, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik, Am I the Odd One? Exploring (In)Congruencies in the Realism of Avatars and Virtual Others in Virtual Reality, In arXiv. 2024. Preprint
[Download] [BibSonomy] [Doi]
@article{mal2024oddone, author = {David Mal and Nina Döllinger and Erik Wolf and Stephan Wenninger and Mario Botsch and Carolin Wienrich and Marc Erich Latoschik}, journal = {arXiv}, url = {https://arxiv.org/pdf/2403.07122.pdf}, year = {2024}, title = {Am I the Odd One? Exploring (In)Congruencies in the Realism of Avatars and Virtual Others in Virtual Reality} }
Abstract: Virtual humans play a pivotal role in social virtual environments, shaping users' VR experiences. The diversity in available options and users' preferences can result in a heterogeneous mix of appearances among a group of virtual humans. The resulting variety in higher-order anthropomorphic and realistic cues introduces multiple (in)congruencies, eventually impacting the plausibility of the experience. In this work, we consider the impact of (in)congruencies in the realism of a group of virtual humans, including co-located others and one's self-avatar. In a 2 x 3 mixed design, participants embodied either (1) a personalized realistic or (2) a customized stylized self-avatar across three consecutive VR exposures in which they were accompanied by a group of virtual others being either (1) all realistic, (2) all stylized, or (3) mixed. Our results indicate groups of virtual others of higher realism, i.e., potentially more congruent with participants' real-world experiences and expectations, were considered more human-like, increasing the feeling of co-presence and the impression of interaction possibilities. (In)congruencies concerning the homogeneity of the group did not cause considerable effects. Furthermore, our results indicate that a self-avatar's congruence with the participant's real-world experiences concerning their own physical body yielded notable benefits for virtual body ownership and self-identification for realistic personalized avatars. Notably, the incongruence between a stylized self-avatar and a group of realistic virtual others resulted in diminished ratings of self-location and self-identification. We conclude on the implications of our findings and discuss our results within current theories of VR experiences, considering (in)congruent visual cues and their impact on the perception of virtual others, self-representation, and spatial presence.
David Mal, Erik Wolf, Nina Döllinger, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik, From 2D-Screens to VR: Exploring the Effect of Immersion on the Plausibility of Virtual Humans, In CHI 24 Conference on Human Factors in Computing Systems Extended Abstracts, p. 8. 2024.
[Download] [BibSonomy]
@inproceedings{mal2024vhpvr, author = {David Mal and Erik Wolf and Nina Döllinger and Mario Botsch and Carolin Wienrich and Marc Erich Latoschik}, url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-chi-vhp-in-vr-preprint.pdf}, year = {2024}, booktitle = {CHI 24 Conference on Human Factors in Computing Systems Extended Abstracts}, pages = {8}, title = {From 2D-Screens to VR: Exploring the Effect of Immersion on the Plausibility of Virtual Humans} }
Abstract: Virtual humans significantly contribute to users' plausible XR experiences. However, it may not only be the congruent rendering of the virtual human but also the intermediary display technology that has a significant impact on virtual humans' plausibility. In a low-immersive desktop-based and a high-immersive VR condition, participants rated realistic and abstract animated virtual humans regarding plausibility, affective appraisal, and social judgments. First, our results confirmed the factor structure of a preliminary virtual human plausibility questionnaire in VR. Further, the appearance and behavior of realistic virtual humans were overall perceived as more plausible compared to abstract virtual humans, an effect that increased with high immersion. Moreover, only for high immersion, realistic virtual humans were rated as more trustworthy and sympathetic than abstract virtual humans. Interestingly, we observed a potential uncanny valley effect for low but not for high immersion. We discuss the impact of a natural perception of anthropomorphic and realistic cues in VR and highlight the potential of immersive technology to elicit distinct effects in virtual humans.
Christian Merz, Christopher Göttfert, Carolin Wienrich, Marc Erich Latoschik, Universal Access for Social XR Across Devices: The Impact of Immersion on the Experience in Asymmetric Virtual Collaboration, In Proceedings of the 31st IEEE Virtual Reality conference (VR '24). 2024.
[Download] [BibSonomy]
@inproceedings{merz2024universal, author = {Christian Merz and Christopher Göttfert and Carolin Wienrich and Marc Erich Latoschik}, url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-universal-access-social-xr.pdf}, year = {2024}, booktitle = {Proceedings of the 31st IEEE Virtual Reality conference (VR '24)}, title = {Universal Access for Social XR Across Devices: The Impact of Immersion on the Experience in Asymmetric Virtual Collaboration} }
Abstract: This article investigates the influence of input/output device characteristics and degrees of immersion on the User Experience (UX) of specific eXtended Reality (XR) effects, i.e., presence, self-perception, other-perception, and task perception. It targets universal access to social XR, where dedicated XR hardware is unavailable or cannot be used, but participation is desirable or even necessary. We compare three different device configurations: (i) desktop screen with mouse, (ii) desktop screen with tracked controllers, and (iii) Head-Mounted Display (HMD) with tracked controllers. 87 participants took part in collaborative dyadic interaction (a sorting task) with asymmetric device configurations in a specifically developed social XR. In line with prior research, the sense of presence and embodiment were significantly lower for the desktop setups. However, we only found minor differences in task load and no differences in usability and enjoyment of the task between the conditions. Additionally, the perceived humanness and virtual human plausibility of the other were not affected, no matter the device used. Finally, there was no impact regarding co-presence and social presence independent of the level of immersion of oneself or the other. We conclude that the device in social XR is important for self-perception and presence. However, our results indicate that the devices do not affect important UX and usability aspects, specifically, the qualities of social interaction in collaborative scenarios, paving the way for universal access to social XR encounters and significantly promoting participation.
Florian Kern, Jonathan Tschanter, Marc Erich Latoschik, Handwriting for Text Input and the Impact of XR Displays, Surface Alignments, and Sentence Complexities, In IEEE Transactions on Visualization and Computer Graphics, pp. 1-11. 2024.
[Download] [BibSonomy] [Doi]
@article{10460576, author = {Florian Kern and Jonathan Tschanter and Marc Erich Latoschik}, journal = {IEEE Transactions on Visualization and Computer Graphics}, url = {https://ieeexplore.ieee.org/document/10460576}, year = {2024}, pages = {1-11}, title = {Handwriting for Text Input and the Impact of XR Displays, Surface Alignments, and Sentence Complexities} }
Abstract: Text input is desirable across various eXtended Reality (XR) use cases and is particularly crucial for knowledge and office work. This article compares handwriting text input between Virtual Reality (VR) and Video See-Through Augmented Reality (VST AR), facilitated by physically aligned and mid-air surfaces when writing simple and complex sentences. In a 2x2x2 experimental design, 72 participants performed two ten-minute handwriting sessions, each including ten simple and ten complex sentences representing text input in real-world scenarios. Our developed handwriting application supports different XR displays, surface alignments, and handwriting recognition based on digital ink. We evaluated usability, user experience, task load, text input performance, and handwriting style. Our results indicate high usability with a successful transfer of handwriting skills to the virtual domain. XR displays and surface alignments did not impact text input speed and error rate. However, sentence complexities did, with participants achieving higher input speeds and fewer errors for simple sentences (17.85 WPM, 0.51% MSD ER) than complex sentences (15.07 WPM, 1.74% MSD ER). Handwriting on physically aligned surfaces showed higher learnability and lower physical demand, making them more suitable for prolonged handwriting sessions. Handwriting on mid-air surfaces yielded higher novelty and stimulation ratings, which might diminish with more experience. Surface alignments and sentence complexities significantly affected handwriting style, leading to enlarged and more connected cursive writing both on mid-air surfaces and for simple sentences. The study also demonstrated the benefits of using XR controllers in a pen-like posture to mimic styluses and pressure-sensitive tips on physical surfaces for input detection. We additionally provide a phrase set of simple and complex sentences as a basis for future text input studies, which can be expanded and adapted.
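For readers unfamiliar with the text-entry metrics reported above: WPM and MSD ER are conventionally computed as in the standard text-entry literature, where WPM counts (|T| − 1) transcribed characters per second, scaled to minutes with five characters per word, and MSD ER normalizes the minimum string distance (Levenshtein edit distance) between the presented and transcribed text by the longer of the two. The following is a minimal sketch of these standard formulas, illustrative code rather than the paper's implementation.

def wpm(transcribed: str, seconds: float) -> float:
    # Words per minute: (|T| - 1) characters per second,
    # scaled to minutes, with five characters counted as one word.
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def msd(a: str, b: str) -> int:
    # Minimum string distance (Levenshtein edit distance).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def msd_error_rate(presented: str, transcribed: str) -> float:
    # MSD ER: edit distance normalized by the longer string, in percent.
    return msd(presented, transcribed) / max(len(presented), len(transcribed)) * 100.0

# Example: one transcribed sentence, timed at 20 seconds.
p, t = "the quick brown fox", "the quick brwn fox"
print(f"{wpm(t, 20.0):.2f} WPM, {msd_error_rate(p, t):.2f}% MSD ER")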
Franziska Westermeier, Larissa Brübach, Carolin Wienrich, Marc Erich Latoschik, Assessing Depth Perception in VR and Video See-Through AR: A Comparison on Distance Judgment, Performance, and Preference, In IEEE Transactions on Visualization and Computer Graphics, pp. 1-11. 2024.
[Download] [BibSonomy] [Doi]
@article{westermeier2024assessing, author = {Franziska Westermeier and Larissa Brübach and Carolin Wienrich and Marc Erich Latoschik}, journal = {IEEE Transactions on Visualization and Computer Graphics}, url = {https://ieeexplore.ieee.org/document/10458408}, year = {2024}, pages = {1-11}, title = {Assessing Depth Perception in VR and Video See-Through AR: A Comparison on Distance Judgment, Performance, and Preference} }
Abstract: Spatial User Interfaces along the Reality-Virtuality continuum heavily depend on accurate depth perception. However, current display technologies still exhibit shortcomings in the simulation of accurate depth cues, and these shortcomings also vary between Virtual or Augmented Reality (VR, AR: eXtended Reality (XR) for short). This article compares depth perception between VR and Video See-Through (VST) AR. We developed a digital twin of an existing office room where users had to perform five depth-dependent tasks in VR and VST AR. Thirty-two participants took part in a user study using a 1×4 within-subjects design. Our results reveal higher misjudgment rates in VST AR due to conflicting depth cues between virtual and physical content. Increased head movements observed in participants were interpreted as a compensatory response to these conflicting cues. Furthermore, a longer task completion time in the VST AR condition indicates a lower task performance in VST AR. Interestingly, although participants rated the VR condition as easier, and despite their increased misjudgments and lower performance with the VST AR display, a majority still expressed a preference for the VST AR experience. We discuss and explain these findings with the high visual dominance and referential power of the physical content in the VST AR condition, leading to a higher spatial presence and plausibility.
See all publications here