2025
Peter Kullmann, Theresa Schell, Timo Menzel, Mario Botsch, Marc Erich Latoschik,
Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness,
In IEEE Transactions on Visualization and Computer Graphics, Vol. 31(5), pp. 3613-3622. 2025.
@article{kullmann2025coverage,
title = {Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness},
author = {Kullmann, Peter and Schell, Theresa and Menzel, Timo and Botsch, Mario and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2025},
volume = {31},
number = {5},
pages = {3613--3622},
url = {https://ieeexplore.ieee.org/document/10919002},
doi = {10.1109/TVCG.2025.3549887}
}
Abstract: Facial expressions are crucial for many eXtended Reality (XR) use cases, from mirrored self-exposures to social XR, where users interact via their avatars as digital alter egos. However, current XR devices differ in sensor coverage of the face region. Hence, a faithful reconstruction of facial expressions either has to exclude these areas or synthesize missing animation data with model-based approaches, potentially leading to perceivable mismatches between executed and perceived expression. This paper investigates potential effects of the coverage of facial animations (none, partial, or whole) on important factors of self-perception. We exposed 83 participants to their mirrored personalized avatar. They were shown their mirrored avatar face with upper and lower face animation, upper face animation only, lower face animation only, or no face animation. Whole animations were rated higher in virtual embodiment and slightly lower in uncanniness. Missing animations did not differ from partial ones in terms of virtual embodiment. Contrasts showed significantly lower humanness, lower eeriness, and lower attractiveness for the partial conditions. For questions related to self-identification, effects were mixed. We discuss participants' shift in body part attention across conditions. Qualitative results show participants perceived their virtual representation as fascinating yet uncanny.
Peter Kullmann, Theresa Schell, Mario Botsch, Marc Erich Latoschik,
Eye-to-eye or face-to-face? Face and head substitution for co-located augmented reality,
In Frontiers in Virtual Reality, Vol. 6. 2025.
@article{kullmann2025eyetoeye,
title = {Eye-to-eye or face-to-face? Face and head substitution for co-located augmented reality},
author = {Kullmann, Peter and Schell, Theresa and Botsch, Mario and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2025},
volume = {6},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2025.1594350},
doi = {10.3389/frvir.2025.1594350}
}
Abstract: In co-located extended reality (XR) experiences, headsets occlude their wearers’ facial expressions, impeding natural conversation. We introduce two techniques to mitigate this using off-the-shelf hardware: compositing a view of a personalized avatar behind the visor (“see-through visor”) and reducing the headset’s visibility and showing the avatar’s head (“head substitution”). We evaluated them in a repeated-measures dyadic study (N = 25) that indicated promising effects. Collaboration with a confederate with our techniques, compared to a no-avatar baseline, resulted in quicker consensus in a judgment task and enhanced perceived mutual understanding. However, the avatar was also rated and commented on as uncanny, though participant comments indicate tolerance for avatar uncanniness since they restore gaze utility. Furthermore, performance in an executive task deteriorated in the presence of our techniques, indicating that our implementation drew participants’ attention to their partner’s avatar and away from the task. We suggest giving users agency over how these techniques are applied and recommend using the same representation across interaction partners to avoid power imbalances.
2023
Peter Kullmann, Timo Menzel, Mario Botsch, Marc Erich Latoschik,
An Evaluation of Other-Avatar Facial Animation Methods for Social VR,
In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1-7. New York, NY, USA: Association for Computing Machinery, 2023.
@inproceedings{kullmann2023facialExpressionComparison,
title = {An Evaluation of Other-Avatar Facial Animation Methods for Social VR},
author = {Kullmann, Peter and Menzel, Timo and Botsch, Mario and Latoschik, Marc Erich},
booktitle = {Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems},
year = {2023},
pages = {1--7},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-chi-Kullmann-An-Evaluation-of-Other-Avatar-Facial-Animation-Methods-for-Social-VR.pdf},
doi = {10.1145/3544549.3585617}
}
Abstract: We report a mixed-design study on the effect of facial animation method (static, synthesized, or tracked expressions) and its synchronization to speaker audio (in sync or delayed by the method’s inherent latency) on an avatar’s perceived naturalness and plausibility. We created a virtual human for an actor and recorded his spontaneous half-minute responses to conversation prompts. As a simulated immersive interaction, 44 participants unfamiliar with the actor observed and rated performances rendered with the avatar, each with the different facial animation methods. Half of them observed performances in sync and the others with the animation method’s latency. Results show audio synchronization did not influence ratings and static faces were rated less natural and less plausible than animated faces. Notably, synthesized expressions were rated as more natural and more plausible than tracked expressions. Moreover, ratings of verbal behavior naturalness differed in the same way. We discuss implications of these results for avatar-mediated communication.
2022
Chiara Palmisano, Peter Kullmann, Ibrahem Hanafi, Marta Verrecchia, Marc Erich Latoschik, Andrea Canessa, Martin Fischbach, Ioannis Ugo Isaias,
A Fully-Immersive Virtual Reality Setup to Study Gait Modulation,
In Frontiers in Human Neuroscience, Vol. 16. 2022.
@article{10.3389/fnhum.2022.783452,
title = {A Fully-Immersive Virtual Reality Setup to Study Gait Modulation},
author = {Palmisano, Chiara and Kullmann, Peter and Hanafi, Ibrahem and Verrecchia, Marta and Latoschik, Marc Erich and Canessa, Andrea and Fischbach, Martin and Isaias, Ioannis Ugo},
journal = {Frontiers in Human Neuroscience},
year = {2022},
volume = {16},
url = {https://www.frontiersin.org/article/10.3389/fnhum.2022.783452},
doi = {10.3389/fnhum.2022.783452}
}
Abstract: Objective: Gait adaptation to environmental challenges is fundamental for independent and safe community ambulation. The possibility of precisely studying gait modulation using standardized protocols of gait analysis closely resembling everyday life scenarios is still an unmet need. Methods: We have developed a fully-immersive virtual reality (VR) environment where subjects have to adjust their walking pattern to avoid collision with a virtual agent (VA) crossing their gait trajectory. We collected kinematic data of 12 healthy young subjects walking in the real world (RW) and in the VR environment, both with (VR/A+) and without (VR/A-) the VA perturbation. The VR environment closely resembled the RW scenario of the gait laboratory. To ensure standardization of the obstacle presentation, the starting time, speed, and trajectory of the VA were defined using the kinematics of the participant as detected online during each walking trial. Results: We did not observe kinematic differences between walking in RW and VR/A-, suggesting that our VR environment per se might not induce significant changes in the locomotor pattern. When facing the VA, all subjects consistently reduced stride length and velocity while increasing stride duration. Trunk inclination and mediolateral trajectory deviation also facilitated avoidance of the obstacle. Conclusions: This proof-of-concept study shows that our VR/A+ paradigm effectively induced a timely gait modulation in a standardized, immersive, and realistic scenario. This protocol could be a powerful research tool to study gait modulation and its derangements in relation to aging and clinical conditions.
2021
Florian Kern, Matthias Popp, Peter Kullmann, Elisabeth Ganal, Marc Erich Latoschik,
3D Printing an Accessory Dock for XR Controllers and its Exemplary Use as XR Stylus,
In 27th ACM Symposium on Virtual Reality Software and Technology, pp. 1-3. Osaka, Japan: Association for Computing Machinery, 2021.
@inproceedings{kern2021printing,
title = {3D Printing an Accessory Dock for XR Controllers and its Exemplary Use as XR Stylus},
author = {Kern, Florian and Popp, Matthias and Kullmann, Peter and Ganal, Elisabeth and Latoschik, Marc Erich},
booktitle = {27th ACM Symposium on Virtual Reality Software and Technology},
year = {2021},
pages = {1--3},
publisher = {Association for Computing Machinery},
address = {Osaka, Japan},
url = {https://doi.org/10.1145/3489849.3489949},
doi = {10.1145/3489849.3489949}
}
Abstract: This article introduces the accessory dock, a 3D printed multipurpose extension for consumer-grade XR controllers that enables flexible mounting of self-made and commercial accessories. The uniform design of our concept opens new opportunities for XR systems being used for more diverse purposes, e.g., researchers and practitioners could use and compare arbitrary XR controllers within their experiments while ensuring access to buttons and battery housing. As a first example, we present a stylus tip accessory to build an XR Stylus, which can be directly used with frameworks for handwriting, sketching, and UI interaction on physically aligned virtual surfaces. For new XR controllers, we provide instructions on how to adjust the accessory dock to the controller’s form factor. A video tutorial for the construction and the source files for 3D printing are publicly available for reuse, replication, and extension (https://go.uniwue.de/hci-otss-accessory-dock).
Florian Kern, Peter Kullmann, Elisabeth Ganal, Kristof Korwisi, Rene Stingl, Florian Niebling, Marc Erich Latoschik,
Off-The-Shelf Stylus: Using XR Devices for Handwriting and Sketching on Physically Aligned Virtual Surfaces,
In Frontiers in Virtual Reality, Daniel Zielasko (Ed.), Vol. 2, p. 69. 2021.
@article{kern2021offtheshelf,
title = {Off-The-Shelf Stylus: Using XR Devices for Handwriting and Sketching on Physically Aligned Virtual Surfaces},
author = {Kern, Florian and Kullmann, Peter and Ganal, Elisabeth and Korwisi, Kristof and Stingl, Rene and Niebling, Florian and Latoschik, Marc Erich},
editor = {Zielasko, Daniel},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
pages = {69},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2021.684498},
doi = {10.3389/frvir.2021.684498}
}
Abstract: This article introduces the Off-The-Shelf Stylus (OTSS), a framework for 2D interaction (in 3D) as well as for handwriting and sketching with digital pen, ink, and paper on physically aligned virtual surfaces in Virtual, Augmented, and Mixed Reality (VR, AR, MR: XR for short). OTSS supports self-made XR styluses based on consumer-grade six-degrees-of-freedom XR controllers and commercially available styluses. The framework provides separate modules for three basic but vital features: 1) The stylus module provides stylus construction and calibration features. 2) The surface module provides surface calibration and visual feedback features for virtual-physical 2D surface alignment using our so-called 3ViSuAl procedure, and surface interaction features. 3) The evaluation suite provides a comprehensive test bed combining technical measurements for precision, accuracy, and latency with extensive usability evaluations including handwriting and sketching tasks based on established visuomotor, graphomotor, and handwriting research. The framework’s development is accompanied by an extensive open source reference implementation targeting the Unity game engine using an Oculus Rift S headset and Oculus Touch controllers. The development compares three low-cost and low-tech options to equip controllers with a tip and includes a web browser-based surface providing support for interacting, handwriting, and sketching. The evaluation of the reference implementation based on the OTSS framework identified an average stylus precision of 0.98 mm (SD = 0.54 mm) and an average surface accuracy of 0.60 mm (SD = 0.32 mm) in a seated VR environment. The time for displaying the stylus movement as digital ink on the web browser surface in VR was 79.40 ms on average (SD = 23.26 ms), including the physical controller’s motion-to-photon latency visualized by its virtual representation (M = 42.57 ms, SD = 15.70 ms). 
The usability evaluation (N = 10) revealed a low task load, high usability, and high user experience. Participants successfully reproduced given shapes and created legible handwriting, indicating that the OTSS framework and its reference implementation are ready for everyday use. We provide source code access to our implementation, including stylus and surface calibration and surface interaction features, making it easy to reuse, extend, adapt, and/or replicate previous results (https://go.uniwue.de/hci-otss).
Andrea Bartl, Sungchul Jung, Peter Kullmann, Stephan Wenninger, Jascha Achenbach, Erik Wolf, Christian Schell, Robert W. Lindeman, Mario Botsch, Marc Erich Latoschik,
Self-Avatars in Virtual Reality: A Study Protocol for Investigating the Impact of the Deliberateness of Choice and the Context-Match,
In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 565-566. 2021.
@inproceedings{bartl2021selfavatars,
title = {Self-Avatars in Virtual Reality: A Study Protocol for Investigating the Impact of the Deliberateness of Choice and the Context-Match},
author = {Bartl, Andrea and Jung, Sungchul and Kullmann, Peter and Wenninger, Stephan and Achenbach, Jascha and Wolf, Erik and Schell, Christian and Lindeman, Robert W. and Botsch, Mario and Latoschik, Marc Erich},
booktitle = {2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2021},
pages = {565--566},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-deliberateness-contextmatch-poster-preprint.pdf},
doi = {10.1109/VRW52623.2021.00165}
}
Abstract: The illusion of virtual body ownership (VBO) plays a critical role in virtual reality (VR). VR applications provide a broad design space which includes contextual aspects of the virtual surroundings as well as user-driven deliberate choices of their appearance in VR potentially influencing VBO and other well-known effects of VR. We propose a protocol for an experiment to investigate the influence of deliberateness and context-match on VBO and presence. In a first study, we found significant interactions with the environment. Based on our results we derive recommendations for future experiments.
2020
Daniel Roth, Mathis Jording, Tobias Schmee, Peter Kullmann, Nassir Navab, Kai Vogeley,
Towards Computer Aided Diagnosis of Autism Spectrum Disorder Using Virtual Environments,
In 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), pp. 115-122. IEEE, 2020.
@inproceedings{9319100,
title = {Towards Computer Aided Diagnosis of Autism Spectrum Disorder Using Virtual Environments},
author = {Roth, Daniel and Jording, Mathis and Schmee, Tobias and Kullmann, Peter and Navab, Nassir and Vogeley, Kai},
booktitle = {2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)},
year = {2020},
pages = {115--122},
publisher = {IEEE},
url = {https://doi.org/10.1109/AIVR50618.2020.00029},
doi = {10.1109/AIVR50618.2020.00029}
}
Abstract: Autism Spectrum Disorders (ASD) are neurodevelopmental disorders that are associated with characteristic difficulties to express and interpret nonverbal behavior, such as social gaze behavior. The state of the art in diagnosis is the clinical interview, which is time-intensive for clinicians and does not take into account any objective measures of behavior. We herewith propose an empirical approach that can potentially support diagnosis based on the assessment of nonverbal behavior in avatar-mediated interactions in virtual environments. In a first study, ASD individuals and a typically developed control group were interacting in dyads. Head motion and eye gaze of both interlocutors were recorded, replicated to the avatars, and displayed to the partner through a distributed virtual environment. The nonverbal behavior of both interaction partners was recorded, and the resulting preprocessed data was classified with up to 92.9% classification accuracy, with the amount of eye area focus and the average horizontal gaze change being the most relevant features. We expect that such systems could improve the diagnostic assessment on the basis of objective measures of nonverbal behavior.
2019
Daniel Roth, Gary Bente, Peter Kullmann, David Mal, Christian Felix Purps, Kai Vogeley, Marc Erich Latoschik,
Technologies for Social Augmentations in User-Embodied Virtual Reality,
In 25th ACM Symposium on Virtual Reality Software and Technology (VRST), pp. 1-12. 2019.
@conference{roth2019technologies,
title = {Technologies for Social Augmentations in User-Embodied Virtual Reality},
author = {Roth, Daniel and Bente, Gary and Kullmann, Peter and Mal, David and Purps, Christian Felix and Vogeley, Kai and Latoschik, Marc Erich},
booktitle = {25th ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2019},
pages = {1--12},
url = {https://dl.acm.org/doi/pdf/10.1145/3359996.3364269},
doi = {10.1145/3359996.3364269}
}
Abstract: Technologies for Virtual, Mixed, and Augmented Reality (VR, MR, and AR) make it possible to artificially augment social interactions and thus to go beyond what is possible in real life. Motivations for the use of social augmentations are manifold, for example, to synthesize behavior when sensory input is missing, to provide additional affordances in shared environments, or to support inclusion and training of individuals with social communication disorders. We review and categorize augmentation approaches and propose a software architecture based on four data layers. Three components further handle the status analysis, the modification, and the blending of behaviors. We present a prototype (injectX) that supports behavior tracking (body motion, eye gaze, and facial expressions from the lower face), status analysis, decision-making, augmentation, and behavior blending in immersive interactions. Along with a critical reflection, we consider further technical and ethical aspects.
2018
Daniel Roth, Peter Kullmann, Gary Bente, Dominik Gall, Marc Erich Latoschik,
Effects of Hybrid and Synthetic Social Gaze in Avatar-Mediated Interactions,
In Adjunct Proceedings of the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 103-108. IEEE, ACM, 2018.
@inproceedings{roth2018effects,
title = {Effects of Hybrid and Synthetic Social Gaze in Avatar-Mediated Interactions},
author = {Roth, Daniel and Kullmann, Peter and Bente, Gary and Gall, Dominik and Latoschik, Marc Erich},
booktitle = {Adjunct Proceedings of the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2018},
pages = {103--108},
publisher = {IEEE, ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ismar-augmentedgaze-roth-preprint.pdf}
}
Abstract: Human gaze is a crucial element in social interactions and therefore an important topic for social Augmented, Mixed, and Virtual Reality (AR, MR, VR) applications. In this paper we systematically compare four modes of gaze transmission: (1) natural gaze, (2) hybrid gaze, which combines natural gaze transmission with a social gaze model, (3) synthesized gaze, which combines a random gaze transmission with a social gaze model, and (4) purely random gaze. Investigating dyadic interactions, results show a linear trend for the perception of virtual rapport, trust, and interpersonal attraction, suggesting that these measures increase with higher naturalness and social adequateness of the transmission mode. We further investigated the perception of realism as well as the resulting gaze behavior of the avatars and the human participants. We discuss these results and their implications.
Daniel Roth, David Mal, Christian Felix Purps, Peter Kullmann, Marc Erich Latoschik,
Injecting Nonverbal Mimicry with Hybrid Avatar-Agent Technologies: A Naïve Approach,
In Proceedings of the 6th ACM Symposium on Spatial User Interaction (SUI), pp. 69-73. ACM, 2018.
Honorable mention award 🏆
@inproceedings{roth2018injecting,
title = {Injecting Nonverbal Mimicry with Hybrid Avatar-Agent Technologies: A Naïve Approach},
author = {Roth, Daniel and Mal, David and Purps, Christian Felix and Kullmann, Peter and Latoschik, Marc Erich},
booktitle = {Proceedings of the 6th ACM Symposium on Spatial User Interaction (SUI)},
year = {2018},
pages = {69--73},
publisher = {ACM},
note = {Honorable mention award 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-sui-mimicry-roth-preprint.pdf},
doi = {10.1145/3267782.3267791}
}
Abstract: Humans communicate to a large degree through nonverbal behavior. Nonverbal mimicry, i.e., the imitation of another’s behavior, can positively affect social interactions. In virtual environments, user behavior can be replicated to avatars, and agent behaviors can be artificially constructed. By combining both, hybrid avatar-agent technologies aim at actively mediating virtual communication to foster interpersonal understanding and rapport. We present a naïve prototype, the “Mimicry Injector”, that injects artificial mimicry into real-time virtual interactions. In an evaluation study, two participants were embodied in a Virtual Reality (VR) simulation and had to perform a negotiation task. Their virtual characters either a) replicated only the original behavior or b) displayed the original behavior plus induced mimicry. We found that most participants did not detect the modification. However, the modification did not have a significant impact on the perception of the communication.
2017
Peter Kullmann, Roman Eyck, Marc Erich Latoschik, Daniel Roth,
Augmenting Human Gaze in Avatar-Mediated Communication (Poster).
Poster presentations, Interdisciplinary College, 2017.
@inproceedings{kullmann2017augmenting,
title = {Augmenting Human Gaze in Avatar-Mediated Communication (Poster)},
author = {Kullmann, Peter and Eyck, Roman and Latoschik, Marc Erich and Roth, Daniel},
year = {2017},
publisher = {Poster presentations, Interdisciplinary College}
}
Abstract: Future social virtual environments will allow machines to transcend face-to-face interaction and manipulate human social interactions in computer-mediated communication. Whereas in real-world interactions we use our real bodies as means to mediate our message and communicate additional information, in virtual environments we are represented by avatars. Progressing from Mori’s Uncanny Valley theory [1] and similar to "the medium is the message" [2] approaches, many research activities have investigated the impact of manipulating the appearance realism of virtual characters [3]. However, the impact of behavioural realism and its potential augmentation in avatar-mediated communication is not fully understood.
The present work in progress investigates the impact of the behaviour most likely disclosing human comprehension: gaze. Gaze cues are reciprocal nonverbal signals that are used both to detect information about interlocutors and to communicate to them [4]. Where our conversational partners focus their attention is relevant for building rapport and interaction naturalness [5].
In an avatar-mediated communication system prototype, we examine how augmenting avatars' gaze behaviour influences the gaze behaviour of humans communicating via avatars and their rating of the communication quality. We let our avatars make eye contact whenever the human they are facing speaks. We hypothesise that this will increase the quality of the social interaction and will lead to participants acting more attentively with regard to their gaze behaviour. Furthermore, we aim to identify a quantitative relationship between the degree of the aforementioned manipulation and the acceptance of the communicative counterpart as human or machine. While participants adjusting their gaze behaviour in relation to the avatar augmentation implies great opportunities for using avatar-mediated communication in therapeutic applications, its ethical implications have to be addressed.