Human-Computer Interaction
Erik Wolf successfully defended his PhD thesis
We can now call him Doctor Wolf.
Summer EXPO 2024 Recap
The Summer EXPO 2024 for HCI/HCS, CS, and GE was a great success! Numerous visitors experienced up to 120 different demos and projects.
HiAvA @ BMBF
HiAvA at the final meeting of the BMBF-funded XR research consortia
Summer Expo 2024 Invitation
This year's Summer EXPO takes place on 19 July 2024. Feel free to visit and explore the many interesting projects on display.
AI and eXtended Reality at the Medienstudierendentagung
The HCI Chair and PIIS working group showcased innovative research at the Medienstudierendentagung (MeStuTa)

Open Positions

Research Assistant (m/f/d) Wanted for the AIL AT WORK Project
We have an open research staff position for the AIL AT WORK project.


Recent Publications

Smi Hinterreiter, Martin Wessel, Fabian Schliski, Isao Echizen, Marc Erich Latoschik, Timo Spinde, NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback, In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 19. 2025. Conditionally accepted for publication
[Download] [BibSonomy]
@article{hinterreiter2025newsunfold, author = {Smi Hinterreiter and Martin Wessel and Fabian Schliski and Isao Echizen and Marc Erich Latoschik and Timo Spinde}, journal = {Proceedings of the International AAAI Conference on Web and Social Media}, url = {https://arxiv.org/abs/2407.17045}, year = {2025}, volume = {19}, title = {NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback} }
Abstract: Media bias is a multifaceted problem, leading to one-sided views and impacting decision-making. A way to address digital media bias is to detect and indicate it automatically through machine-learning methods. However, such detection is limited due to the difficulty of obtaining reliable training data. Human-in-the-loop-based feedback mechanisms have proven an effective way to facilitate the data-gathering process. Therefore, we introduce and test feedback mechanisms for the media bias domain, which we then implement on NewsUnfold, a news-reading web application to collect reader feedback on machine-generated bias highlights within online news articles. Our approach augments dataset quality by significantly increasing inter-annotator agreement by 26.31% and improving classifier performance by 2.49%. As the first human-in-the-loop application for media bias, the feedback mechanism shows that a user-centric approach to media bias data collection can return reliable data while being scalable and evaluated as easy to use. NewsUnfold demonstrates that feedback mechanisms are a promising strategy to reduce data collection expenses and continuously update datasets to changes in context.
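The abstract reports a 26.31% increase in inter-annotator agreement. As a rough illustration of what such a metric measures (a minimal sketch of mean pairwise percent agreement, not necessarily the exact agreement statistic used in the paper):

```python
from itertools import combinations
from typing import List

def percent_agreement(annotations: List[List[int]]) -> float:
    """Mean pairwise percent agreement across annotators.

    annotations: one list of labels per annotator, aligned by item,
    e.g. 1 = "biased", 0 = "not biased" for each sentence.
    """
    pairs = list(combinations(annotations, 2))
    if not pairs:
        return 1.0
    scores = []
    for a, b in pairs:
        matches = sum(1 for x, y in zip(a, b) if x == y)
        scores.append(matches / len(a))
    return sum(scores) / len(scores)
```

Feedback mechanisms like NewsUnfold's aim to raise this value by surfacing disagreements to readers, who then confirm or reject machine-generated bias highlights.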
Jinghuai Lin, Christian Rack, Carolin Wienrich, Marc Erich Latoschik, Usability, Acceptance, and Trust of Privacy Protection Mechanisms and Identity Management in Social Virtual Reality, In 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE Computer Society, 2024. Accepted for publication
[Download] [BibSonomy]
@inproceedings{lin2024usability, author = {Jinghuai Lin and Christian Rack and Carolin Wienrich and Marc Erich Latoschik}, url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-social-vr-identity-management-preprint.pdf}, year = {2024}, booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)}, publisher = {IEEE Computer Society}, title = {Usability, Acceptance, and Trust of Privacy Protection Mechanisms and Identity Management in Social Virtual Reality} }
Christian Merz, Jonathan Tschanter, Florian Kern, Jean-Luc Lugrin, Carolin Wienrich, Marc Erich Latoschik, Pipelining Processors for Decomposing Character Animation, In 30th ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA: Association for Computing Machinery, 2024.
[Download] [BibSonomy] [Doi]
@inproceedings{merz2024processor, author = {Christian Merz and Jonathan Tschanter and Florian Kern and Jean-Luc Lugrin and Carolin Wienrich and Marc Erich Latoschik}, url = {https://doi.org/10.1145/3641825.3689533}, year = {2024}, booktitle = {30th ACM Symposium on Virtual Reality Software and Technology}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, series = {VRST '24}, doi = {10.1145/3641825.3689533}, title = {Pipelining Processors for Decomposing Character Animation} }
Abstract: This paper presents an openly available implementation of a modular pipeline architecture for character animation. It effectively decomposes frequently necessary processing steps into dedicated character processors, such as copying data from various motion sources, applying inverse kinematics, or scaling the character. Processors can easily be parameterized, extended (e.g., with AI), and freely arranged or even duplicated in any order necessary, greatly reducing side effects and fostering fine-tuning, maintenance, and reusability of the complex interplay of real-time animation steps.
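The processor-pipeline pattern the abstract describes can be sketched as follows (a minimal illustration with hypothetical processor names; the paper's actual, openly available implementation is not reproduced here):

```python
from typing import Dict, List, Tuple

# A character pose modeled as a mapping from joint name to position.
Pose = Dict[str, Tuple[float, float, float]]

class CharacterProcessor:
    """Base class: each processor transforms a pose and returns it."""
    def process(self, pose: Pose) -> Pose:
        raise NotImplementedError

class CopySource(CharacterProcessor):
    """Copies joint data from a motion source (e.g. tracking input)."""
    def __init__(self, source: Pose):
        self.source = source
    def process(self, pose: Pose) -> Pose:
        pose.update(self.source)
        return pose

class ScaleCharacter(CharacterProcessor):
    """Uniformly scales all joint positions, e.g. to match user height."""
    def __init__(self, factor: float):
        self.factor = factor
    def process(self, pose: Pose) -> Pose:
        return {joint: tuple(c * self.factor for c in pos)
                for joint, pos in pose.items()}

def run_pipeline(processors: List[CharacterProcessor], pose: Pose) -> Pose:
    # Processors run in order; they can be freely arranged or duplicated,
    # which is the decomposition idea the paper builds on.
    for p in processors:
        pose = p.process(pose)
    return pose
```

Because each step is an isolated processor, a stage such as inverse kinematics or an AI-based filter can be inserted, reordered, or duplicated without side effects on the other stages.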
Christian Merz, Carolin Wienrich, Marc Erich Latoschik, Does Voice Matter? The Effect of Verbal Communication and Asymmetry on the Experience of Collaborative Social XR, In 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE Computer Society, 2024. Accepted for publication
[Download] [BibSonomy]
@inproceedings{merz2024voice, author = {Christian Merz and Carolin Wienrich and Marc Erich Latoschik}, url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-does-voice-matter-preprint.pdf}, year = {2024}, booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)}, publisher = {IEEE Computer Society}, title = {Does Voice Matter? The Effect of Verbal Communication and Asymmetry on the Experience of Collaborative Social XR} }
Abstract: This work evaluates how the asymmetry of device configurations and verbal communication influence the user experience of social eXtended Reality (XR) for self-perception, other-perception, and task perception. We developed an application that enables social collaboration between two users with varying device configurations. We compare the conditions of one symmetric interaction, where both device configurations are Head-Mounted Displays (HMDs) with tracked controllers, with the conditions of one asymmetric interaction, where one device configuration is an HMD with tracked controllers and the other device configuration is a desktop screen with a mouse. In our study, 52 participants collaborated in a dyadic interaction on a sorting task while talking to each other. We compare our results to previous work that evaluated the same scenario without verbal communication. In line with prior research, self-perception is influenced by the immersion of the used device configuration and verbal communication. While co-presence was not affected by the device configuration or the inclusion of verbal communication, social presence was only higher for HMD configurations that allowed verbal communication. Task perception was hardly affected by the device configuration or verbal communication. We conclude that the device in social XR is important for self-perception with or without verbal communication. However, the results indicate that the device configuration only affects the qualities of social interaction in collaborative scenarios when verbal communication is enabled. To sum up, asymmetric collaboration maintains the high quality of self-perception and interaction for highly immersed users while still enabling the participation of less immersed users.
Samantha Monty, Florian Kern, Marc Erich Latoschik, Analysis of Immersive Mid-Air Sketching Behavior, Sketch Quality, and User Experience in Design Ideation Tasks, In 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE Computer Society, 2024. Accepted for publication
[BibSonomy]
@inproceedings{monty2024, author = {Samantha Monty and Florian Kern and Marc Erich Latoschik}, year = {2024}, booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)}, publisher = {IEEE Computer Society}, title = {Analysis of Immersive Mid-Air Sketching Behavior, Sketch Quality, and User Experience in Design Ideation Tasks} }
Abstract: Immersive 3D sketching systems empower users with tools to create sketches directly in the air around themselves, in all three dimensions, using only simple hand gestures. These sketching systems have the potential to greatly extend the interactive capabilities of immersive learning environments. The perceptual challenges of Virtual Reality (VR), however, combined with the ergonomic and cognitive challenges of creating mid-air 3D sketches reduce the effectiveness of immersive sketching used for problem-solving, reflection, and to capture fleeting ideas. We contribute to the understanding of the potential challenges of mid-air sketching systems in educational settings, where expression is valued higher than accuracy, and sketches are used to support problem-solving and to explain abstract concepts. We conducted an empirical study with 36 participants with different spatial abilities to investigate if the way that people sketch in mid-air is dependent on the goal of the sketch. We compare the technique, quality, efficiency, and experience of participants as they create 3D mid-air sketches in three different tasks. We examine how users approach mid-air sketching when the sketches they create serve to convey meaning and when sketches are merely reproductions of geometric models created by someone else. We found that in tasks aimed at expressing personal design ideas, between starting and ending strokes, participants moved their heads more and their controllers at higher velocities and created strokes in faster times than in tasks aimed at recreating 3D geometric figures. They reported feeling less time pressure to complete sketches but redacted a larger percentage of strokes. These findings serve to inform the design of creative virtual environments that support reasoning and reflection through mid-air sketching. 
With this work, we aim to strengthen the power of immersive systems that support mid-air 3D sketching by exploiting natural user behavior to help users convey their meaning in sketches more quickly and faithfully.
Olaf Clausen, Martin Mišiak, Arnulph Fuhrmann, Ricardo Marroquim, Marc Erich Latoschik, A Practical Real-Time Model for Diffraction on Rough Surfaces, In Journal of Computer Graphics Techniques, Vol. 13(1), pp. 1-27. 2024.
[Download] [BibSonomy]
@article{clausen2024practical, author = {Olaf Clausen and Martin Mišiak and Arnulph Fuhrmann and Ricardo Marroquim and Marc Erich Latoschik}, journal = {Journal of Computer Graphics Techniques}, number = {1}, url = {https://jcgt.org/published/0013/01/01/}, year = {2024}, pages = {1-27}, volume = {13}, title = {A Practical Real-Time Model for Diffraction on Rough Surfaces} }
Abstract: Wave optics phenomena have a significant impact on the visual appearance of rough conductive surfaces even when illuminated with partially coherent light. Recent models address these phenomena, but none is real-time capable due to the complexity of the underlying physics equations. We provide a practical real-time model, building on the measurements and model by Clausen et al. 2023, that approximates diffraction-induced wavelength shifts and speckle patterns with only a small computational overhead compared to the popular Cook-Torrance GGX model. Our model is suitable for Virtual Reality applications, as it contains domain-specific improvements to address the issues of aliasing and highlight disparity.
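The Cook-Torrance GGX model named as the performance baseline is built around the GGX (Trowbridge-Reitz) normal distribution function. For reference, a minimal sketch of that standard term (the baseline only; the paper's diffraction extension is not shown):

```python
import math

def ggx_ndf(n_dot_h: float, alpha: float) -> float:
    """GGX (Trowbridge-Reitz) normal distribution function D(h).

    n_dot_h: cosine between surface normal and half vector.
    alpha:   roughness parameter (perceptual roughness squared).
    """
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

The paper's contribution is to approximate diffraction-induced wavelength shifts and speckle on top of a model of this family with only a small added cost, which is what makes it viable for VR frame budgets.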
Timo Menzel, Erik Wolf, Stephan Wenninger, Niklas Spinczyk, Lena Holderrieth, Ulrich Schwanecke, Marc Erich Latoschik, Mario Botsch, WILDAVATARS: Smartphone-Based Reconstruction of Full-Body Avatars in the Wild, In TechRxiv. 2024. Preprint
[Download] [BibSonomy] [Doi]
@article{menzel2024wildavatars, author = {Timo Menzel and Erik Wolf and Stephan Wenninger and Niklas Spinczyk and Lena Holderrieth and Ulrich Schwanecke and Marc Erich Latoschik and Mario Botsch}, journal = {TechRxiv}, url = {https://d197for5662m48.cloudfront.net/documents/publicationstatus/221002/preprint_pdf/475c2f7830adb5d85a17466ac50bc9c5.pdf}, year = {2024}, doi = {10.36227/techrxiv.172503940.07538627/v1}, title = {WILDAVATARS: Smartphone-Based Reconstruction of Full-Body Avatars in the Wild} }
Abstract: Realistic full-body avatars play a key role in representing users in virtual environments, where they have been shown to considerably improve body ownership and presence. Driven by the growing demand for realistic virtual humans, extensive research on scanning-based avatar reconstruction has been conducted in recent years. Most methods, however, require complex hardware, such as expensive camera rigs and/or controlled capture setups, thereby restricting avatar generation to specialized labs. We propose WILDAVATARS, an approach that empowers even non-experts without access to complex equipment to capture realistic avatars in the wild. Our avatar generation is based on an easy-to-use smartphone application that guides the user through the scanning process and uploads the captured data to a server, which in a fully automatic manner reconstructs a photorealistic avatar that is ready to be downloaded into a VR application. To increase the availability and foster the use of realistic virtual humans in VR applications we will make WILDAVATARS publicly available for research purposes.
Sophia Maier, Sebastian Oberdörfer, Marc Erich Latoschik, Ballroom Dance Training with Motion Capture and Virtual Reality, In Proceedings of Mensch Und Computer 2024 (MuC '24), pp. 617-621. New York, NY, USA: Association for Computing Machinery, 2024.
[Download] [BibSonomy] [Doi]
@inproceedings{maier2024ballroom, author = {Sophia Maier and Sebastian Oberdörfer and Marc Erich Latoschik}, url = {https://dl.acm.org/doi/10.1145/3670653.3677499}, year = {2024}, booktitle = {Proceedings of Mensch Und Computer 2024 (MuC '24)}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, pages = {617-621}, doi = {10.1145/3670653.3677499}, title = {Ballroom Dance Training with Motion Capture and Virtual Reality} }
Abstract: This paper investigates the integration of motion capture and virtual reality (VR) technologies in competitive ballroom dancing (slow waltz, tango, slow foxtrot, Viennese waltz, quickstep), aiming to analyze posture correctness and provide feedback to dancers for posture enhancement. Through qualitative interviews, the study identifies specific requirements and gathers insights into potentially helpful feedback mechanisms. Using Unity and motion capture technology, we implemented a prototype system featuring real-time visual cues for posture correction and a replay function for analysis. A validation study with competitive ballroom dancers reveals generally positive feedback on the system’s usefulness, though challenges like cable obstruction and poor usability of the user interface are noted. Insights from participants inform future refinements, emphasizing the need for precise feedback, cable-free movement, and user-friendly interfaces. While the program is promising for ballroom dance training, further research is needed to evaluate the system’s overall efficacy.
See all publications here