The Summer EXPO 2024 for HCI/HCS, CS and GE was a great success! A large number of visitors were able to experience up to 120 different demos and projects.
This year's Summer EXPO takes place on the 19th of July 2024. Feel free to visit and experience many interesting projects.
The HCI Chair and the PIIS working group showcased innovative research at the Medienstudierendentagung (MeStuTa).
The Girls' Day took place on April 25th, 2024, and was a great success! Together with the XR Hum Nuremberg we conducted parallel workshops where the girls got familiar with XR technologies and learned about the background of designing XR experiences.
Open Positions
We have an open research staff position for the AIL AT WORK project.
Recent Publications
NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback, In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 19.
2025. Conditionally accepted for publication
[Download] [BibSonomy]
@article{hinterreiter2025newsunfold,
author = {Smi Hinterreiter and Martin Wessel and Fabian Schliski and Isao Echizen and Marc Erich Latoschik and Timo Spinde},
journal = {Proceedings of the International AAAI Conference on Web and Social Media},
url = {https://arxiv.org/abs/2407.17045},
year = {2025},
volume = {19},
title = {NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback}
}
Abstract:
Media bias is a multifaceted problem, leading to one-sided views and impacting decision-making. A way to address digital media bias is to detect and indicate it automatically through machine-learning methods. However, such detection is limited due to the difficulty of obtaining reliable training data. Human-in-the-loop-based feedback mechanisms have proven an effective way to facilitate the data-gathering process. Therefore, we introduce and test feedback mechanisms for the media bias domain, which we then implement on NewsUnfold, a news-reading web application to collect reader feedback on machine-generated bias highlights within online news articles. Our approach augments dataset quality by significantly increasing inter-annotator agreement by 26.31% and improving classifier performance by 2.49%. As the first human-in-the-loop application for media bias, the feedback mechanism shows that a user-centric approach to media bias data collection can return reliable data while being scalable and evaluated as easy to use. NewsUnfold demonstrates that feedback mechanisms are a promising strategy to reduce data collection expenses and continuously update datasets to changes in context.
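The reported gain in inter-annotator agreement can be made concrete with a chance-corrected agreement measure. The sketch below uses Cohen's kappa for two annotators as an illustration; the paper does not state that this exact metric was used, and the function name and example labels are hypothetical.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each annotator's marginal label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labelling sentences as biased (1) or neutral (0).
a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 0, 1, 0, 0, 0, 1, 1]
print(cohens_kappa(a, b))  # 0.5
```

A kappa of 0 means agreement no better than chance, 1 means perfect agreement, so dataset-quality improvements like the one described above show up directly in this statistic.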
Analysis of Immersive Mid-Air Sketching Behavior, Sketch Quality, and User Experience in Design Ideation Tasks, In 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
IEEE Computer Society,
2024.
[BibSonomy]
@inproceedings{monty2024,
author = {Samantha Monty and Florian Kern and Marc Erich Latoschik},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
title = {Analysis of Immersive Mid-Air Sketching Behavior, Sketch Quality, and User Experience in Design Ideation Tasks}
}
Abstract:
Immersive 3D sketching systems empower users with tools to create sketches directly in the air around themselves, in all three dimensions, using only simple hand gestures. These sketching systems have the potential to greatly extend the interactive capabilities of immersive learning environments. The perceptual challenges of Virtual Reality (VR), however, combined with the ergonomic and cognitive challenges of creating mid-air 3D sketches, reduce the effectiveness of immersive sketching used for problem-solving, reflection, and capturing fleeting ideas. We contribute to the understanding of the potential challenges of mid-air sketching systems in educational settings, where expression is valued higher than accuracy, and sketches are used to support problem-solving and to explain abstract concepts. We conducted an empirical study with 36 participants with different spatial abilities to investigate if the way that people sketch in mid-air is dependent on the goal of the sketch. We compare the technique, quality, efficiency, and experience of participants as they create 3D mid-air sketches in three different tasks. We examine how users approach mid-air sketching when the sketches they create serve to convey meaning and when sketches are merely reproductions of geometric models created by someone else. We found that in tasks aimed at expressing personal design ideas, participants moved their heads more between starting and ending strokes, moved their controllers at higher velocities, and created strokes in faster times than in tasks aimed at recreating 3D geometric figures. They reported feeling less time pressure to complete sketches but redacted a larger percentage of strokes. These findings serve to inform the design of creative virtual environments that support reasoning and reflection through mid-air sketching. With this work, we aim to strengthen the power of immersive systems that support mid-air 3D sketching by exploiting natural user behavior to assist users to more quickly and faithfully convey their meaning in sketches.
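The comparison of controller velocities across tasks can be illustrated with a minimal sketch of how mean stroke speed might be derived from timestamped tracking samples. The sampling format and function below are hypothetical, not taken from the paper's implementation.

```python
import math

def mean_stroke_velocity(samples):
    """Mean controller speed over one stroke, from (t, x, y, z) samples in seconds/meters."""
    total_dist = 0.0
    # Sum the Euclidean distances between consecutive tracked positions.
    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        total_dist += math.dist(p0, p1)
    duration = samples[-1][0] - samples[0][0]
    return total_dist / duration

# A short straight stroke: 0.3 m drawn in 0.5 s -> 0.6 m/s.
stroke = [(0.0, 0.0, 1.2, 0.0), (0.25, 0.15, 1.2, 0.0), (0.5, 0.3, 1.2, 0.0)]
print(round(mean_stroke_velocity(stroke), 3))  # 0.6
```

Aggregating such per-stroke speeds per task is one plausible way to compare expressive versus reproductive sketching behavior as described above.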
A Practical Real-Time Model for Diffraction on Rough Surfaces, In Journal of Computer Graphics Techniques, Vol. 13(1), pp. 1-27.
2024.
[Download] [BibSonomy]
@article{clausen2024practical,
author = {Olaf Clausen and Martin Mišiak and Arnulph Fuhrmann and Ricardo Marroquim and Marc Erich Latoschik},
journal = {Journal of Computer Graphics Techniques},
number = {1},
url = {https://jcgt.org/published/0013/01/01/},
year = {2024},
pages = {1-27},
volume = {13},
title = {A Practical Real-Time Model for Diffraction on Rough Surfaces}
}
Abstract:
Wave optics phenomena have a significant impact on the visual appearance of rough conductive surfaces even when illuminated with partially coherent light. Recent models address these phenomena, but none is real-time capable due to the complexity of the underlying physics equations. We provide a practical real-time model, building on the measurements and model by Clausen et al. 2023, that approximates diffraction-induced wavelength shifts and speckle patterns with only a small computational overhead compared to the popular Cook-Torrance GGX model. Our model is suitable for Virtual Reality applications, as it contains domain-specific improvements to address the issues of aliasing and highlight disparity.
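For orientation, the Cook-Torrance GGX baseline mentioned above centers on the GGX (Trowbridge-Reitz) normal distribution function. Below is a minimal sketch of that standard term only; it is not the authors' diffraction model, and the parameter convention (alpha as squared roughness) varies between engines.

```python
import math

def ggx_ndf(cos_theta_h, alpha):
    """GGX (Trowbridge-Reitz) normal distribution function D(h).

    cos_theta_h: cosine of the angle between the half-vector and the surface normal.
    alpha: roughness parameter (often alpha = perceptual_roughness ** 2).
    """
    a2 = alpha * alpha
    denom = cos_theta_h * cos_theta_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# Rougher surfaces spread the highlight: the peak at the normal direction drops.
print(ggx_ndf(1.0, 0.1) > ggx_ndf(1.0, 0.5))  # True
```

The diffraction model described above adds wavelength-dependent effects on top of such a geometric-optics baseline, which is why its overhead is measured relative to this term.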
WILDAVATARS: Smartphone-Based Reconstruction of Full-Body Avatars in the Wild, In TechRxiv.
2024. Preprint
[Download] [BibSonomy] [Doi]
@article{menzel2024wildavatars,
author = {Timo Menzel and Erik Wolf and Stephan Wenninger and Niklas Spinczyk and Lena Holderrieth and Ulrich Schwanecke and Marc Erich Latoschik and Mario Botsch},
journal = {TechRxiv},
url = {https://d197for5662m48.cloudfront.net/documents/publicationstatus/221002/preprint_pdf/475c2f7830adb5d85a17466ac50bc9c5.pdf},
year = {2024},
doi = {10.36227/techrxiv.172503940.07538627/v1},
title = {WILDAVATARS: Smartphone-Based Reconstruction of Full-Body Avatars in the Wild}
}
Abstract:
Realistic full-body avatars play a key role in representing users in virtual environments, where they have been shown to considerably improve body ownership and presence. Driven by the growing demand for realistic virtual humans, extensive research on scanning-based avatar reconstruction has been conducted in recent years. Most methods, however, require complex hardware, such as expensive camera rigs and/or controlled capture setups, thereby restricting avatar generation to specialized labs. We propose WILDAVATARS, an approach that empowers even non-experts without access to complex equipment to capture realistic avatars in the wild. Our avatar generation is based on an easy-to-use smartphone application that guides the user through the scanning process and uploads the captured data to a server, which in a fully automatic manner reconstructs a photorealistic avatar that is ready to be downloaded into a VR application. To increase the availability and foster the use of realistic virtual humans in VR applications we will make WILDAVATARS publicly available for research purposes.
Ballroom Dance Training with Motion Capture and Virtual Reality, In Proceedings of Mensch Und Computer 2024 (MuC '24), pp. 617-621. New York, NY, USA:
Association for Computing Machinery,
2024.
[Download] [BibSonomy] [Doi]
@inproceedings{maier2024ballroom,
author = {Sophia Maier and Sebastian Oberdörfer and Marc Erich Latoschik},
url = {https://dl.acm.org/doi/10.1145/3670653.3677499},
year = {2024},
booktitle = {Proceedings of Mensch Und Computer 2024 (MuC '24)},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
pages = {617-621},
doi = {10.1145/3670653.3677499},
title = {Ballroom Dance Training with Motion Capture and Virtual Reality}
}
Abstract:
This paper investigates the integration of motion capture and virtual reality (VR) technologies in competitive ballroom dancing (slow waltz, tango, slow foxtrot, Viennese waltz, quickstep), aiming to analyze posture correctness and provide feedback to dancers for posture enhancement. Through qualitative interviews, the study identifies specific requirements and gathers insights into potentially helpful feedback mechanisms. Using Unity and motion capture technology, we implemented a prototype system featuring real-time visual cues for posture correction and a replay function for analysis. A validation study with competitive ballroom dancers reveals generally positive feedback on the system's usefulness, though challenges such as cable obstruction and poor usability of the user interface are noted. Insights from participants inform future refinements, emphasizing the need for precise feedback, cable-free movement, and user-friendly interfaces. While the program is promising for ballroom dance training, further research is needed to evaluate the system's overall efficacy.
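As a toy illustration of the kind of posture metric a system like this might compute from motion-capture joints, the sketch below derives spine tilt from two tracked positions. The joint format and function are hypothetical (the actual prototype was built in Unity), sketched here in Python with a y-up coordinate convention.

```python
import math

def spine_tilt_degrees(hip, shoulder):
    """Angle in degrees between the hip->shoulder vector and the vertical (y-up) axis."""
    v = [s - h for s, h in zip(shoulder, hip)]
    norm = math.sqrt(sum(c * c for c in v))
    cos_tilt = v[1] / norm  # y component over length: cosine of the tilt angle
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_tilt))))

# Upright posture: shoulders directly above hips -> 0 degrees tilt.
print(round(spine_tilt_degrees((0, 1.0, 0), (0, 1.5, 0)), 1))  # 0.0
# Leaning 0.2 m forward over a 0.5 m spine segment.
print(round(spine_tilt_degrees((0, 1.0, 0), (0, 1.5, 0.2)), 1))  # 21.8
```

Comparing such an angle against a per-dance tolerance is one simple way a real-time visual cue for posture correction could be triggered.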
Pushing Yourself to the Limit - Influence of Emotional Virtual Environment Design on Physical Training in VR, In ACM Games.
2024. accepted for publication
[BibSonomy]
@article{oberdorfer2024pushing,
author = {Sebastian Oberdörfer and Sophia C Steinhaeusser and Amiin Najjar and Clemens Tümmers and Marc Erich Latoschik},
journal = {ACM Games},
year = {2024},
title = {Pushing Yourself to the Limit - Influence of Emotional Virtual Environment Design on Physical Training in VR}
}
Abstract:
The design of virtual environments (VEs) can strongly influence users' emotions. These VEs are also an important aspect of immersive Virtual Reality (VR) exergames: training systems that can inspire athletes to train in a highly motivated way and achieve a higher training intensity. VR-based training and rehabilitation systems can increase a user's motivation to train and to repeat physical exercises. The surrounding VE can thus markedly influence users' motivation and hence potentially even their physical performance. Besides providing potentially motivating environments, physical training can be enhanced by gamification. However, it is unclear whether the surrounding VE of a VR-based physical training system influences the effectiveness of gamification. We investigate whether an emotionally positive or emotionally negative design influences sports performance and interacts with the positive effects of gamification. In a user study, we immerse participants in VEs following either an emotionally positive, neutral, or emotionally negative design and measure how long participants can hold a static strength-endurance exercise. The study targeted the investigation of the effects of 1) emotional VE design as well as 2) the presence and absence of gamification. We did not observe significant differences in participant performance across the conditions of VE design or gamification. Gamification had a dominating effect on emotion and motivation over the emotional design of the VEs, thus indicating an overall positive impact. The emotional design influenced the participants' intrinsic motivation but caused mixed results with respect to emotion. Overall, our results indicate the importance of using gamification, support the commonly used emotionally positive VEs for physical training, but further indicate that the design space could also include other directions of VE design.
The Influence of a Low-Resolution Peripheral Display Extension on the Perceived Plausibility and Presence, In 30th ACM Symposium on Virtual Reality Software and Technology (VRST).
ACM,
2024. Accepted for publication
[Download] [BibSonomy] [Doi]
@inproceedings{brubach2024influence,
author = {Larissa Brübach and Marius Röhm and Franziska Westermeier and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024_vrst_bruebach_peripheral_display_extension.pdf},
year = {2024},
booktitle = {30th ACM Symposium on Virtual Reality Software and Technology (VRST)},
publisher = {ACM},
doi = {10.1145/3641825.3687713},
title = {The Influence of a Low-Resolution Peripheral Display Extension on the Perceived Plausibility and Presence}
}
Abstract:
The Field of View (FoV) is a central technical display characteristic of Head-Mounted Displays (HMDs), which has been shown to have a notable impact on important aspects of the user experience. For example, an increased FoV has been shown to foster a sense of presence and improve peripheral information processing, but it also increases the risk of VR sickness. This article investigates the impact of a wider but inhomogeneous FoV on perceived plausibility, measuring its effects on presence, spatial presence, and VR sickness as a comparison to and replication of effects from prior work. We developed a low-resolution peripheral display extension to pragmatically increase the FoV, taking into account the lower peripheral acuity of the human eye. While this design results in inhomogeneous resolutions of HMDs at the display edges, it is also a low-complexity and low-cost extension. However, its effects on important VR qualities have to be identified. We conducted two experiments with 30 and 27 participants, respectively. In a randomized 2x3 within-subject design, participants played three rounds of bowling in VR, both with and without the display extension. Two rounds contained incongruencies to induce breaks in plausibility. In experiment 2, we enhanced one incongruency to make it more noticeable and improved the shortcomings of the display extension that had previously been identified. However, in neither study did the low-resolution FoV extension show a measurable effect on perceived plausibility, presence, spatial presence, or VR sickness. We found that one of the incongruencies could cause a break in plausibility without the extension, confirming the results of a previous study.
Manipulating Immersion: The Impact of Perceptual Incongruence on Perceived Plausibility in VR, In 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
IEEE Computer Society,
2024. Accepted for publication
[BibSonomy]
@inproceedings{brubach2024manipulating,
author = {Larissa Brübach and Mona Röhm and Franziska Westermeier and Marc Erich Latoschik and Carolin Wienrich},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
title = {Manipulating Immersion: The Impact of Perceptual Incongruence on Perceived Plausibility in VR}
}
Abstract:
This work presents a study where we used incongruencies on the cognitive and the perceptual layer to investigate their effects on perceived plausibility and, thereby, presence and spatial presence. We used a 2x3 within-subject design with the factors familiar size (cognitive manipulation) and immersion (perceptual manipulation). For the different levels of immersion, we implemented three different tracking qualities: rotation-and-translation tracking, rotation-only tracking, and stereoscopic-view-only tracking. Participants scanned products in a virtual supermarket where the familiar size of these objects was manipulated. Simultaneously, they could either move their heads normally or had to use the thumbsticks to navigate their view of the environment. Results show that both manipulations had a negative effect on perceived plausibility and, thereby, presence. In addition, the tracking manipulation also had a negative effect on spatial presence. These results are especially interesting in light of the ongoing discussion about the role of plausibility and congruence in evaluating XR environments. The results can hardly be explained by traditional presence models, where immersion should not be an influencing factor for perceived plausibility. However, they are in agreement with the recently introduced Congruence and Plausibility (CaP) model and provide empirical evidence for the model's predicted pathways.