2026
Marie Luisa Fiedler, Christian Merz, Lukas Schach, Jonathan Tschanter, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik, Am I Still Me? Visual Congruence Across Reality–Virtuality and Avatar Appearance in Shaping Self-Perception and Behavior. In IEEE Transactions on Visualization and Computer Graphics. 2026. To be published.
@article{fiedler2026still,
title = {Am I Still Me? Visual Congruence Across Reality–Virtuality and Avatar Appearance in Shaping Self-Perception and Behavior},
author = {Fiedler, Marie Luisa and Merz, Christian and Schach, Lukas and Tschanter, Jonathan and Botsch, Mario and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2026},
note = {To be published.}
}
Abstract: This paper presents the first systematic investigation of how congruence in visual self-representation influences self-perception and behavior. We span a continuum from the physical self through avatars with graded self-similarity to clearly dissimilar avatars in virtual reality (VR). In a 1x4 within-user study, participants completed movement and quiz tasks in either physical reality or a digital twin environment in VR, where they embodied one of three avatars: a photorealistic self-similar avatar, a dissimilar same-gender avatar, or a dissimilar opposite-gender avatar. Subjective measures included presence, sense of embodiment, self-identification, and perceived change, and were complemented by an objective movement metric of behavioral change. Compared to physical reality, VR, even with a self-similar avatar, produced lower presence, a weaker sense of embodiment, and reduced self-identification, revealing a persistent gap in visual congruence. Within VR, self-similar avatars enhanced body ownership, self-location, and self-identification relative to dissimilar avatars. Conversely, dissimilar avatars produced measurable behavioral changes compared with self-similar ones. Gender cues, however, had little impact in gender-neutral tasks. Overall, the findings show that photorealistic self-similar avatars reinforce embodiment and self-identification. However, VR still falls short of achieving congruence with physical reality, underscoring key challenges for avatar realism and ecological validity.
Florian Kern, Lukas Polifke, Paula Friedrich, Marc Erich Latoschik, Carolin Wienrich, David Obremski, CECA - A Configurable Framework for Embodied Conversational AI Agents in Extended Reality. In 2026 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). 2026. To be published.
@inproceedings{kern2026configurable,
title = {CECA - A Configurable Framework for Embodied Conversational AI Agents in Extended Reality},
author = {Kern, Florian and Polifke, Lukas and Friedrich, Paula and Latoschik, Marc Erich and Wienrich, Carolin and Obremski, David},
booktitle = {2026 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2026},
note = {To be published.}
}
Abstract: We present CECA, a configurable framework for embodied conversational AI agents in Unity-based extended reality (XR) applications. CECA employs a client–server architecture to decouple agent logic from game engine–based embodiment. Built on LiveKit Agents, our approach integrates speech-to-text (STT), large language models (LLMs), and text-to-speech (TTS) into a unified, streaming voice-to-voice pipeline configured via metadata rather than code changes. We outline how this architecture flexibly integrates local and cloud AI providers while mitigating limited provider SDK support in Unity. Finally, we highlight opportunities for future work, including multi-agent scenarios, higher-level templates for XR research, and systematic user studies.
Murat Yalcin, Marc Erich Latoschik, End-to-End Non-Invasive ECG Signal Generation from PPG Signal: A Self-Supervised Learning Approach. In Frontiers in Physiology. 2026. To be published.
@article{yalcin2026endtoend,
title = {End-to-End Non-Invasive ECG Signal Generation from PPG Signal: A Self-Supervised Learning Approach},
author = {Yalcin, Murat and Latoschik, Marc Erich},
journal = {Frontiers in Physiology},
year = {2026},
note = {To be published.},
url = {https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2026.1694995/abstract}
}
Abstract: Electrocardiogram (ECG) signals are frequently utilized for detecting important cardiac events, such as variations in ECG intervals, as well as for monitoring essential physiological metrics, including heart rate (HR) and heart rate variability (HRV). However, accurate ECG measurement traditionally requires a clinical environment, limiting its feasibility for continuous, everyday monitoring. In contrast, photoplethysmography (PPG) offers a non-invasive, cost-effective optical method for capturing cardiac data in daily settings and is increasingly utilized in various clinical and commercial wearable devices. However, PPG measurements are significantly less detailed than those of ECG. In this study, we propose a novel approach to synthesize ECG signals from PPG signals, facilitating the generation of robust ECG waveforms using a simple, unobtrusive wearable setup. Our approach utilizes a Transformer-based Generative Adversarial Network model, designed to accurately capture ECG signal patterns and enhance generalization capabilities. Additionally, we incorporate self-supervised learning techniques to enable the model to learn diverse ECG patterns through specific tasks. Model performance is evaluated using various metrics, including heart rate calculation and root mean squared error (RMSE), on two different datasets. The comprehensive performance analysis demonstrates that our model generates accurate ECG signals (reducing heart rate calculation error by 83.9% and 72.4% on the MIMIC III and Who is Alyx? datasets, respectively), suggesting its potential application in the healthcare domain to enhance heart rate prediction and overall cardiac monitoring. As an empirical proof of concept, we also present an Atrial Fibrillation (AF) detection task, showcasing the practical utility of the generated ECG signals for cardiac diagnostic applications.
To encourage replicability and reuse in future ECG generation studies, we have shared the dataset and will also make the code publicly available.
Jonathan Tschanter, Christian Merz, Carolin Wienrich, Marc Erich Latoschik, How Harassment Shapes Self-Perception and Well-Being in Social VR: Evidence from a Controlled Lab Study. In IEEE Transactions on Visualization and Computer Graphics. 2026. To be published.
@article{tschanter2026harassment,
title = {How Harassment Shapes Self-Perception and Well-Being in Social VR: Evidence from a Controlled Lab Study},
author = {Tschanter, Jonathan and Merz, Christian and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2026},
note = {To be published.}
}
Abstract: Social Virtual Reality (SVR) allows users to meet and build relationships through embodied avatars and real-time interaction in virtual spaces. While embodiment can strengthen social connections and presence, it can also intensify negative encounters, making SVR particularly vulnerable to harassment. Despite frequent reports of verbal, visual, and "physical" violations in SVR, little is known about how harassment reshapes users' self-perception, including their sense of embodiment, self-identification, closeness, and avatar customization preferences. We conducted a controlled experiment with 52 participants who experienced either a neutral or a harassment condition in a scenario modeled after real SVR incidents. Participants perceived the harassing peer as significantly more negative, annoying, and disturbing than the neutral peer. Contrary to prior reports, harassment did not significantly affect well-being measures, including emotional state, self-esteem, and physiological arousal, within this controlled scenario. However, participants reported stronger bodily change, attributed more of their own attitudes and emotions to their avatars, and increased interpersonal distance when personal space was invaded. Self-reported coping strategies included ignoring, stepping back, using humor, and retaliating. Notably, avatar customization preferences shifted across conditions. Participants in the neutral condition favored personalized avatars, whereas those in the harassment condition more frequently preferred anonymity in public spaces. Together, these findings demonstrate that harassment in SVR not only exploits embodiment but also reshapes self-perception. We further contribute methodological insights into how harassment can be ethically and reproducibly studied in controlled SVR-like experiments.
Marie Luisa Fiedler, Christian Merz, Jonathan Tschanter, Carolin Wienrich, Marc Erich Latoschik, Technological Advances in Two Generations of Consumer-Grade VR Systems: Effects on User Experience and Task Performance. 2026.
@misc{fiedler2026technologicaladvances,
title = {Technological Advances in Two Generations of Consumer-Grade VR Systems: Effects on User Experience and Task Performance},
author = {Fiedler, Marie Luisa and Merz, Christian and Tschanter, Jonathan and Wienrich, Carolin and Latoschik, Marc Erich},
year = {2026},
url = {https://arxiv.org/abs/2601.09610}
}
Jonathan Tschanter, Christian Merz, Marie Luisa Fiedler, Carolin Wienrich, Marc Erich Latoschik, Use Case Matters: Comparing the User Experience and Task Performance Across Tasks for Embodied Interaction in VR. In IEEE Transactions on Visualization and Computer Graphics. 2026. To be published.
@article{tschanter2026matters,
title = {Use Case Matters: Comparing the User Experience and Task Performance Across Tasks for Embodied Interaction in VR},
author = {Tschanter, Jonathan and Merz, Christian and Fiedler, Marie Luisa and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2026},
note = {To be published.}
}
Abstract: Integrated Virtual Reality (IVR) systems are central to avatar-mediated use cases in Virtual Reality (VR), reconstructing users' movements on avatars. They differ primarily in their tracking architectures, which determine how completely and accurately users' movements are captured and reconstructed.
Many current IVR systems reduce user-worn hardware, trading reconstruction accuracy against cost and setup complexity, yet their impact on user experience and task performance across use cases remains underexplored. We compared three reduced user-worn IVR systems, each with a distinct technical approach: (1) Captury (markerless outside-in optical tracking), (2) Meta Movement SDK (markerless inside-out optical tracking), and (3) Vive Trackers (marker-based outside-in optical tracking with IMUs).
In a 3x5 mixed design, participants performed five tasks simulating different use cases to probe distinct aspects of these systems. No system consistently outperformed the others. Meta excelled in hand-based, fast-paced interactions, while Captury and Vive performed better in lower-body tasks and during full-body pose observation. These findings underscore the need to evaluate reduced user-worn IVR systems within the specific use case. We offer practical guidance for system selection based on use-case demands and release our tasks as an open-source, extensible framework to support future evaluations for selecting IVR systems.
2025
Ronja Heinrich, Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces. In Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25). Association for Computing Machinery, 2025.
@inproceedings{heinrich2025systematic,
title = {A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces},
author = {Heinrich, Ronja and Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25)},
year = {2025},
publisher = {Association for Computing Machinery},
url = {https://dl.acm.org/doi/10.1145/3716553.3750790},
doi = {10.1145/3716553.3750790}
}
Abstract: This systematic review investigates the current state of research on multimodal fusion methods, i.e., the joint analysis of multimodal inputs, for intentional, instruction-based human-computer interactions, focusing on the combination of speech and spatially expressive modalities such as gestures, touch, pen, and gaze.
We examine 50 systems from a User-Centered Design perspective, categorizing them by modality combinations, fusion strategies, application domains and media, as well as reusability. Our findings highlight a predominance of descriptive late fusion methods, limited reusability, and a lack of standardized tool support, hampering rapid prototyping and broader applicability. We identify emerging trends in machine learning-based fusion and outline future research directions to advance reusable and user-centered multimodal systems.
Erik Göbel, Daniela Andres, Kristof Korwisi, Marc Erich Latoschik, Martin Hennecke, Algorithmen erleben in Virtual Reality. In Angelika Füting-Lippert, Maria Eisenmann, Silke Grafe, Hans-Stefan Siller, Thomas Trefzger (Eds.), Digitale Medien in Lehr-Lern-Konzepten der Lehrpersonenbildung in interdisziplinärer Perspektive: Ergebnisse des Forschungsprojekts Connected Teacher Education, pp. 173-185. Wiesbaden: Springer Fachmedien Wiesbaden, 2025.
@inbook{gobel:2025a,
title = {Algorithmen erleben in Virtual Reality},
author = {Göbel, Erik and Andres, Daniela and Korwisi, Kristof and Latoschik, Marc Erich and Hennecke, Martin},
editor = {Füting-Lippert, Angelika and Eisenmann, Maria and Grafe, Silke and Siller, Hans-Stefan and Trefzger, Thomas},
booktitle = {Digitale Medien in Lehr-Lern-Konzepten der Lehrpersonenbildung in interdisziplinärer Perspektive: Ergebnisse des Forschungsprojekts Connected Teacher Education},
year = {2025},
pages = {173--185},
publisher = {Springer Fachmedien Wiesbaden},
address = {Wiesbaden},
url = {https://doi.org/10.1007/978-3-658-45088-5_11},
doi = {10.1007/978-3-658-45088-5_11}
}
Abstract: Algorithmic thinking is an important prerequisite for learning to program. However, the abstract nature and actual understanding of algorithms often play a subordinate role when programming is learned through specific environments and programming languages. Instead of recognizing underlying patterns, learners often mainly acquire the ability to solve concrete problems in a concrete programming language. This raises the question of how abstract algorithms can be made tangible and understandable independently of any concrete programming language. Robot Karol is a learning environment frequently used in schools, in which a robot can be programmed and controlled through a small instruction set. Following existing 2D versions of Robot Karol, the approach presented in this contribution has users complete various Robot Karol-style tasks in an environment implemented in Virtual Reality (VR). In contrast to existing desktop versions, our VR approach contains no syntactic component: users' actions are executed and visualized directly, which is intended to make the essential algorithmic processes more tangible.
Timo Menzel, Erik Wolf, Stephan Wenninger, Niklas Spinczyk, Lena Holderrieth, Carolin Wienrich, Ulrich Schwanecke, Marc Erich Latoschik, Mario Botsch, Avatars for the masses: smartphone-based reconstruction of humans for virtual reality. In Frontiers in Virtual Reality, Vol. 6. 2025.
@article{menzel2025avatars,
title = {Avatars for the masses: smartphone-based reconstruction of humans for virtual reality},
author = {Menzel, Timo and Wolf, Erik and Wenninger, Stephan and Spinczyk, Niklas and Holderrieth, Lena and Wienrich, Carolin and Schwanecke, Ulrich and Latoschik, Marc Erich and Botsch, Mario},
journal = {Frontiers in Virtual Reality},
year = {2025},
volume = {6},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2025.1583474/pdf},
doi = {10.3389/frvir.2025.1583474}
}
Christian Merz, Marc Erich Latoschik, Carolin Wienrich, Breaking Immersion Barriers: Smartphone Viability in Asymmetric Virtual Collaboration. In CHI 25 Conference on Human Factors in Computing Systems Extended Abstracts. 2025.
@inproceedings{merz2025smartphone,
title = {Breaking Immersion Barriers: Smartphone Viability in Asymmetric Virtual Collaboration},
author = {Merz, Christian and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {CHI 25 Conference on Human Factors in Computing Systems Extended Abstracts},
year = {2025},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-chilbw-smartphone-asymmetry.pdf},
doi = {10.1145/3706599.3719814}
}
Abstract: As demand grows for cross-device collaboration in virtual environments, users increasingly join shared spaces on varying hardware ranging from head-mounted displays (HMDs) to everyday lower-immersion smartphones. This paper investigates smartphone-based participation compared with fully immersive VR in dyadic asymmetric interaction.
One participant joins via an HMD, while the other uses a smartphone. Through a collaborative sorting task, we evaluate self-perception (presence, embodiment), other-perception (co-presence, social presence, avatar plausibility), and task-perception (task load, enjoyment). We compare our results with previous work that examined VR-VR and desktop-VR pairings. The results show that smartphone users report lower self-perception than VR users. However, other-perception remains comparable to immersive setups.
Interestingly, smartphone participants experience lower mental demand. It appears that device familiarity and intuitive interfaces can compensate for reduced immersion. Overall, our work highlights the viability of smartphones for asymmetric interaction, offering high accessibility without impairing social interaction.
Peter Kullmann, Theresa Schell, Timo Menzel, Mario Botsch, Marc Erich Latoschik, Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness. In IEEE Transactions on Visualization and Computer Graphics, Vol. 31(5), pp. 3613-3622. 2025.
@article{kullmann2025coverage,
title = {Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness},
author = {Kullmann, Peter and Schell, Theresa and Menzel, Timo and Botsch, Mario and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2025},
volume = {31},
number = {5},
pages = {3613-3622},
url = {https://ieeexplore.ieee.org/document/10919002},
doi = {10.1109/TVCG.2025.3549887}
}
Abstract: Facial expressions are crucial for many eXtended Reality (XR) use cases, from mirrored self exposures to social XR, where users interact via their avatars as digital alter egos. However, current XR devices differ in sensor coverage of the face region. Hence, a faithful reconstruction of facial expressions either has to exclude these areas or synthesize missing animation data with model-based approaches, potentially leading to perceivable mismatches between executed and perceived expression. This paper investigates potential effects of the coverage of facial animations (none, partial, or whole) on important factors of self-perception. We exposed 83 participants to their mirrored personalized avatar. They were shown their mirrored avatar face with upper and lower face animation, upper face animation only, lower face animation only, or no face animation. Whole animations were rated higher in virtual embodiment and slightly lower in uncanniness. Missing animations did not differ from partial ones in terms of virtual embodiment. Contrasts showed significantly lower humanness, lower eeriness, and lower attractiveness for the partial conditions. For questions related to self-identification, effects were mixed. We discuss participants' shift in body part attention across conditions. Qualitative results show participants perceived their virtual representation as fascinating yet uncanny.
Lena Holderrieth, Erik Wolf, Marie Luisa Fiedler, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich, Do You Feel Better? The Impact of Embodying Photorealistic Avatars with Ideal Body Weight on Attractiveness and Self-Esteem in Virtual Reality. In 2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW), pp. 1404-1405. IEEE Computer Science, 2025. Best Poster 🏆
@inproceedings{holderrieth2025better,
title = {Do You Feel Better? The Impact of Embodying Photorealistic Avatars with Ideal Body Weight on Attractiveness and Self-Esteem in Virtual Reality},
author = {Holderrieth, Lena and Wolf, Erik and Fiedler, Marie Luisa and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)},
year = {2025},
pages = {1404-1405},
publisher = {IEEE Computer Science},
note = {Best Poster 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-holderrieth-do-you-feel-better.pdf},
doi = {10.1109/VRW66409.2025.00348}
}
Abstract: Body weight issues can manifest in low self-esteem through a negative body image or a feeling of unattractiveness. To explore potential interventions, this pilot study examined whether embodying a photorealistically personalized avatar with enhanced attractiveness affects self-esteem. Participants in the manipulation group adjusted their avatar's body weight to their self-defined ideal, while a control group used unmodified avatars. To confirm the manipulation, we measured the avatars' perceived attractiveness. Results showed that participants found avatars at their ideal weight significantly more attractive, confirming an effective manipulation. Furthermore, the ideal-weight group showed a clear trend towards higher self-esteem post-exposure.
Christian Merz, Carolin Wienrich, Marc Erich Latoschik, Does Task Matter? Task-Dependent Effects of Cross-Device Collaboration on Social Presence. In 2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW). IEEE Computer Science, 2025.
@inproceedings{merz2025taskasymmetry,
title = {Does Task Matter? Task-Dependent Effects of Cross-Device Collaboration on Social Presence},
author = {Merz, Christian and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)},
year = {2025},
publisher = {IEEE Computer Science},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevrw-task-cross-device.pdf},
doi = {10.1109/VRW66409.2025.00116}
}
Abstract: In this work, we explored asymmetric collaboration under two distinct tasks: a collaborative sorting task and a conversational talking task. We answer the research question of how different tasks impact the user experience in asymmetric interaction. Our mixed design compared one symmetric and one asymmetric interaction across two tasks, assessing self-perception (presence, embodiment), other-perception (co-presence, social presence, plausibility), and task perception (task load, enjoyment). 52 participants collaborated in dyads on the two tasks, either both using head-mounted displays (HMDs) or one participant using an HMD and the other a desktop setup. Results indicate that differences in social presence diminished or disappeared during the purely conversational talking task compared to the sorting task. This suggests that differences in how a social interaction is perceived, caused by asymmetric interaction, only occur in specific use cases. These findings underscore the critical role of task characteristics in shaping users' social XR experiences and highlight that asymmetric collaboration can be effective across different use cases and is even on par with symmetric interaction during conversations.
Marie Luisa Fiedler, Arne Bürger, Sabrina Mittermeier, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich, Evaluating VR and AR Mirror Exposure for Anorexia Nervosa Therapy in Adolescents: A Method Proposal for Understanding Stakeholder Perspectives. In 2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW), pp. 965-970. IEEE Computer Science, 2025.
@inproceedings{fiedler2025evaluating,
title = {Evaluating VR and AR Mirror Exposure for Anorexia Nervosa Therapy in Adolescents: A Method Proposal for Understanding Stakeholder Perspectives},
author = {Fiedler, Marie Luisa and Bürger, Arne and Mittermeier, Sabrina and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)},
year = {2025},
pages = {965-970},
publisher = {IEEE Computer Science},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-fiedler-stakeholder-focus-group-proposal.pdf}
}
Abstract: Body image distortions in anorexia nervosa pose significant therapeutic challenges, requiring innovative interventions. Virtual Reality (VR) and Augmented Reality (AR) technologies offer promising solutions, yet the preferences of stakeholders such as therapists and patients remain unexplored. This methodological proposal outlines focus groups to compare VR and AR mirror exposures using personalized and body-weight-modifiable avatars in anorexia nervosa therapy. Therapists will evaluate therapeutic potential, risks, and practicality, while adolescent patients will assess comfort, stress responses, and usability. The findings aim to advance the user-centered integration of VR and AR into anorexia nervosa therapy, addressing critical treatment gaps.
Peter Kullmann, Theresa Schell, Mario Botsch, Marc Erich Latoschik, Eye-to-eye or face-to-face? Face and head substitution for co-located augmented reality. In Frontiers in Virtual Reality, Vol. 6. 2025.
@article{kullmann2025eyetoeye,
title = {Eye-to-eye or face-to-face? Face and head substitution for co-located augmented reality},
author = {Kullmann, Peter and Schell, Theresa and Botsch, Mario and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2025},
volume = {6},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2025.1594350},
doi = {10.3389/frvir.2025.1594350}
}
Abstract: In co-located extended reality (XR) experiences, headsets occlude their wearers’ facial expressions, impeding natural conversation. We introduce two techniques to mitigate this using off-the-shelf hardware: compositing a view of a personalized avatar behind the visor (“see-through visor”) and reducing the headset’s visibility and showing the avatar’s head (“head substitution”). We evaluated them in a repeated-measures dyadic study (N = 25) that indicated promising effects. Collaboration with a confederate with our techniques, compared to a no-avatar baseline, resulted in quicker consensus in a judgment task and enhanced perceived mutual understanding. However, the avatar was also rated and commented on as uncanny, though participant comments indicate tolerance for avatar uncanniness since they restore gaze utility. Furthermore, performance in an executive task deteriorated in the presence of our techniques, indicating that our implementation drew participants’ attention to their partner’s avatar and away from the task. We suggest giving users agency over how these techniques are applied and recommend using the same representation across interaction partners to avoid power imbalances.
Samantha Monty, Dennis Alexander Mevißen, Marc Erich Latoschik, Improving Mid-Air Sketching in Room-Scale Virtual Reality with Dynamic Color-to-Depth and Opacity Cues. In IEEE Transactions on Visualization and Computer Graphics. 2025. To be published.
@article{monty2025improving,
title = {Improving Mid-Air Sketching in Room-Scale Virtual Reality with Dynamic Color-to-Depth and Opacity Cues},
author = {Monty, Samantha and Mevißen, Dennis Alexander and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2025},
note = {To be published.}
}
Abstract: Immersive 3D mid-air sketching systems liberate users from the confines of traditional 2D sketching canvases. However, perceptual challenges in Virtual Reality (VR), combined with the ergonomic and cognitive challenges of sketching in all three dimensions in mid-air, lower the accuracy and aesthetic quality of 3D sketches. This paper explores how color-to-depth and opacity cues support users in creating and perceiving freehand 3D strokes in room-scale sketching, unlocking a full 360° of freedom for creation. We implemented three graphic depth shader cues modifying the (1) alpha, (2) hue, and (3) value levels of a single color to dynamically adjust the color and transparency of meshes relative to their depth from the user. We investigated how these depth cues influence sketch efficiency, sketch quality, and the overall sketching experience with 24 participants in a comparative, counterbalanced, 4x1 within-subjects user study. First, with our graphic depth shader cues we successfully transferred results of prior research on seated sketching tasks to room-scale scenarios. Our color-to-depth cues improved the similarity of sketches to target models, highlighting the usefulness of the color-to-depth approach even for the increased range of motion and depth in room-scale sketching. Second, our shaders helped participants complete tasks faster, spend a greater percentage of task time sketching, feel less mentally tired, and feel more efficient while sketching in room-scale environments. We discuss these findings and share our insights and conclusions to advance research on improving spatial cognition in immersive sketching systems.
Kristina Förster, Rebecca Hein, Carolin Wienrich, Marc Erich Latoschik, Silke Grafe, Interdisziplinäre Entwicklung Eines Konzepts für die Weiterbildung von Dozierenden in der Lehrpersonenbildung Unter Nutzung von Social Virtual Reality. In Angelika Füting-Lippert, Maria Eisenmann, Silke Grafe, Hans-Stefan Siller, Thomas Trefzger (Eds.), Digitale Medien in Lehr-Lern-Konzepten der Lehrpersonenbildung in interdisziplinärer Perspektive: Ergebnisse des Forschungsprojekts Connected Teacher Education, pp. 159-172. Wiesbaden: Springer Fachmedien Wiesbaden, 2025.
@inbook{forster:2025a,
title = {Interdisziplinäre Entwicklung Eines Konzepts für die Weiterbildung von Dozierenden in der Lehrpersonenbildung Unter Nutzung von Social Virtual Reality},
author = {Förster, Kristina and Hein, Rebecca and Wienrich, Carolin and Latoschik, Marc Erich and Grafe, Silke},
editor = {Füting-Lippert, Angelika and Eisenmann, Maria and Grafe, Silke and Siller, Hans-Stefan and Trefzger, Thomas},
booktitle = {Digitale Medien in Lehr-Lern-Konzepten der Lehrpersonenbildung in interdisziplinärer Perspektive: Ergebnisse des Forschungsprojekts Connected Teacher Education},
year = {2025},
pages = {159--172},
publisher = {Springer Fachmedien Wiesbaden},
address = {Wiesbaden},
url = {https://doi.org/10.1007/978-3-658-45088-5_10},
doi = {10.1007/978-3-658-45088-5_10}
}
Abstract: In light of developments in the media sector and of globalization, new tasks arise for schools and teaching, and thus also for fostering the media-pedagogical and intercultural competence of (prospective) teachers. Against this background, this chapter presents the interdisciplinary development of a continuing-education concept for instructors in teacher education using Social Virtual Reality (SVR). Key pedagogical approaches of the concept, in the sense of an action orientation, are engagement with complex tasks, role plays, dialogues, and portfolio work. The technical further development of the SVR learning environment comprises the creation of novel tasks as well as the integration of stylized avatars and virtual objects. The attainment of the concept's goals and the effect of its design elements were examined empirically. From the perspective of higher-education didactics and media pedagogy, the results show, among other things, positive developments in instructors' intercultural competence. Consequences for future research and continuing-education practice are discussed in closing.
Franziska Westermeier, Chandni Murmu, Kristopher Kohm, Christopher Pagano, Carolin Wienrich, Sabarish V. Babu, Marc Erich Latoschik,
Interpupillary to Inter-Camera Distance of Video See-Through AR and its Impact on Depth Perception
, In
Proceedings of the 32nd IEEE Virtual Reality conference (VR '25)
, pp. 537-547
.
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{westermeier2025interpupillary,
title = {Interpupillary to Inter-Camera Distance of Video See-Through AR and its Impact on Depth Perception},
author = {Westermeier, Franziska and Murmu, Chandni and Kohm, Kristopher and Pagano, Christopher and Wienrich, Carolin and V. Babu, Sabarish and Latoschik, Marc Erich},
booktitle = {Proceedings of the 32nd IEEE Virtual Reality conference (VR '25)},
year = {2025},
pages = {537-547},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-ipd-icd.pdf},
doi = {10.1109/VR59515.2025.00077}
}
Abstract: Interpupillary distance (IPD) is a crucial characteristic of head-mounted displays (HMDs) because it defines an important property for generating a stereoscopic parallax, which is essential for correct depth perception. This is why contemporary HMDs offer adjustable lenses to adapt to users' individual IPDs.
However, today's Video See-Through Augmented Reality (VST AR) HMDs use fixed camera placements to reconstruct the stereoscopic view of a user's environment.
This leads to a potential mismatch between individual IPD settings and the fixed Inter-Camera Distances (ICD), which in turn can lead to perceptual incongruencies, limiting the usability and potentially the applicability of VST AR in depth-sensitive use cases. To investigate this incongruency between IPD and ICD, we conducted a 2x3 mixed-factor design empirical evaluation using a near-field, open-loop reaching task comparing distance judgments of Virtual Reality (VR) and VST AR. We also explored improvements in reaching performance via perceptual calibration by incorporating a feedback phase between pre- and post-phase conditions, with a particular focus on the influence of IPD-ICD differences. Our Linear Mixed Model (LMM) analysis showed a significant difference between VR and VST AR, a significant effect of IPD-ICD mismatch, as well as a combined effect of both factors. This novel insight and its consequences are discussed specifically for depth perception tasks in AR, eXtended Reality (XR), and potential use cases.
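The geometric consequence of the IPD-ICD mismatch discussed in the abstract can be sketched with a first-order pinhole-stereo approximation; this is an illustrative simplification, not the paper's perceptual model, and it ignores display optics and reprojection.

```python
def perceived_depth(true_depth_m, ipd_m, icd_m):
    """First-order pinhole-stereo estimate of perceived depth under an
    IPD-ICD mismatch in video see-through AR.

    A point at distance Z imaged by cameras separated by ICD produces a
    disparity proportional to ICD / Z; eyes separated by IPD interpret
    that same disparity as depth Z * IPD / ICD. Illustrative sketch
    only, not the model used in the paper.
    """
    return true_depth_m * ipd_m / icd_m
```

Under this approximation, a user whose IPD is narrower than the fixed camera spacing would underestimate distances, which is one intuition for why reaching judgments can differ between VR and VST AR.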
Lukas Schach, Christian Rack, Ryan P. McMahan, Marc Erich Latoschik,
Motion-Based User Identification across XR and Metaverse Applications by Deep Classification and Similarity Learning
.
2025.
[BibTeX]
[Download]
[BibSonomy]
@misc{schach2025motionbaseduseridentificationxr,
title = {Motion-Based User Identification across XR and Metaverse Applications by Deep Classification and Similarity Learning},
author = {Schach, Lukas and Rack, Christian and McMahan, Ryan P. and Latoschik, Marc Erich},
year = {2025},
url = {https://arxiv.org/abs/2509.08539}
}
Simon Seibt, Bastian Kuth, Bartosz von Rymon Lipinski, Thomas Chang, Marc Erich Latoschik,
Multidimensional image morphing-fast image-based rendering of open 3D and VR environments
, In
Virtual Reality & Intelligent Hardware
, Vol.
7
(
2)
, pp. 155-172
.
Elsevier
, 2025.
[BibTeX]
[Download]
[BibSonomy]
@article{seibt2025multidimensional,
title = {Multidimensional image morphing-fast image-based rendering of open 3D and VR environments},
author = {Seibt, Simon and Kuth, Bastian and von Rymon Lipinski, Bartosz and Chang, Thomas and Latoschik, Marc Erich},
journal = {Virtual Reality & Intelligent Hardware},
year = {2025},
volume = {7},
number = {2},
pages = {155--172},
publisher = {Elsevier},
}
Philipp Krop, David Obremski, Astrid Carolus, Marc Erich Latoschik, Carolin Wienrich,
My Co-worker ChatGPT: Development of an XR Application for Embodied Artificial Intelligence in Work Environments
, In
2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)
.
IEEE Computer Science
, 2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{krop2025coworker,
title = {My Co-worker ChatGPT: Development of an XR Application for Embodied Artificial Intelligence in Work Environments},
author = {Krop, Philipp and Obremski, David and Carolus, Astrid and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)},
year = {2025},
publisher = {IEEE Computer Science},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-demo-gpt-preprint.pdf},
doi = {10.1109/VRW66409.2025.00468}
}
Abstract: With recent developments in spatial computing, work contexts might shift to augmented reality. Embodied AIs - virtual conversational agents backed by AI systems - have the potential to enhance these contexts and open up more communication channels than just text. To support knowledge transfer from virtual agent research to the general populace, we developed My CoWorker ChatGPT - an interactive demo where employees can try out various embodied AIs in a virtual office or their own office using augmented reality. We use state-of-the-art speech synthesis and body-scanning technology to create believable and trustworthy AI assistants. The demo was shown at multiple events throughout Germany, where it was well received and sparked fruitful conversations about the possibilities of embodied AI in work contexts.
Smi Hinterreiter, Martin Wessel, Fabian Schliski, Isao Echizen, Marc Erich Latoschik, Timo Spinde,
NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback
, In
Proceedings of the International AAAI Conference on Web and Social Media
, Vol.
19
.
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{hinterreiter2025newsunfold,
title = {NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback},
author = {Hinterreiter, Smi and Wessel, Martin and Schliski, Fabian and Echizen, Isao and Latoschik, Marc Erich and Spinde, Timo},
journal = {Proceedings of the International AAAI Conference on Web and Social Media},
year = {2025},
volume = {19},
url = {https://ojs.aaai.org/index.php/ICWSM/article/view/35847}
}
Abstract: Media bias is a multifaceted problem, leading to one-sided views and impacting decision-making. A way to address digital media bias is to detect and indicate it automatically through machine-learning methods. However, such detection is limited due to the difficulty of obtaining reliable training data. Human-in-the-loop-based feedback mechanisms have proven an effective way to facilitate the data-gathering process. Therefore, we introduce and test feedback mechanisms for the media bias domain, which we then implement on NewsUnfold, a news-reading web application to collect reader feedback on machine-generated bias highlights within online news articles. Our approach augments dataset quality by significantly increasing inter-annotator agreement by 26.31% and improving classifier performance by 2.49%. As the first human-in-the-loop application for media bias, the feedback mechanism shows that a user-centric approach to media bias data collection can return reliable data while being scalable and evaluated as easy to use. NewsUnfold demonstrates that feedback mechanisms are a promising strategy to reduce data collection expenses and continuously update datasets to changes in context.
Sebastian Oberdörfer, Melina Heinisch, Tobias Mühling, Verena Schreiner, Sarah König, Marc Erich Latoschik,
Ready for VR? Assessing VR Competence and Exploring the Role of Human Abilities and Characteristics
, In
Frontiers in Virtual Reality
.
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{oberdorfer2025ready,
title = {Ready for VR? Assessing VR Competence and Exploring the Role of Human Abilities and Characteristics},
author = {Oberdörfer, Sebastian and Heinisch, Melina and Mühling, Tobias and Schreiner, Verena and König, Sarah and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2025},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2025-oberdoerfer-frontiers-vr-competence-preprint.pdf}
}
Abstract: The use of VR for educational purposes provides the opportunity for integrating VR applications into assessments or graded examinations. Interacting with a VR environment requires specific human abilities, thus suggesting the existence of a VR competence. With regard to the emerging field of VR-based examinations, this VR competence might influence a candidate's final grade and hence should be taken into account. In this paper, we proposed and developed a VR competence assessment application. The application features eight individual challenges that are based on generic 3D interaction techniques. In a pilot study, we measured the performance of 18 users. By identifying significant correlations between VR competence score, previous VR experience, and theoretically grounded contributing human abilities and characteristics, we provide first evidence that our VR competence assessment is effective. In addition, we provide first data that a specific VR competence exists. Our analyses further revealed that mainly spatial ability but also immersive tendency correlated with VR competence scores. These insights not only allow educators and researchers to assess and potentially equalize the VR competence level of their subjects, but also help designers to provide effective tutorials for first-time VR users.
Marie Luisa Fiedler, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik,
Self-Similarity Beats Motor Control in Augmented Reality Body Weight Perception
, In
IEEE Transactions on Visualization and Computer Graphics
, Vol.
31
(
5)
.
2025.
Honorable Mention 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{fiedler2025selfsimilarity,
title = {Self-Similarity Beats Motor Control in Augmented Reality Body Weight Perception},
author = {Fiedler, Marie Luisa and Botsch, Mario and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2025},
volume = {31},
number = {5},
note = {Honorable Mention 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-fiedler-self-similarity-beats-motor-control.pdf},
doi = {10.1109/TVCG.2025.3549851}
}
Abstract: This paper investigates if and how self-similarity and having motor control impact sense of embodiment, self-identification, and body weight perception in Augmented Reality (AR). We conducted a 2x2 mixed design experiment involving 60 participants who interacted with either synchronously moving virtual humans or independently moving ones, each with self-similar or generic appearances, across two consecutive AR sessions. Participants evaluated their sense of embodiment, self-identification, and body weight perception of the virtual human. Our results show that self-similarity significantly enhanced sense of embodiment, self-identification, and the accuracy of body weight estimates with the virtual human. However, the effects of having motor control over the virtual human movements were notably weaker in these measures than in similar VR studies. Further analysis indicated that not only the virtual human itself but also the participants' body weight, self-esteem, and body shape concerns predict body weight estimates across all conditions. Our work advances the understanding of virtual human body weight perception in AR systems, emphasizing the importance of factors such as coherence with the real-world environment.
Rebecca Hein, Jeanine Steinbock, Maria Eisenmann, Carolin Wienrich, Marc Erich Latoschik,
Social Virtual Reality für Inter- und Transkulturelles Lernen und Lehren im Englischunterricht
, In
Digitale Medien in Lehr-Lern-Konzepten der Lehrpersonenbildung in interdisziplinärer Perspektive: Ergebnisse des Forschungsprojekts Connected Teacher Education
Angelika Füting-Lippert, Maria Eisenmann, Silke Grafe, Hans-Stefan Siller, Thomas Trefzger (Eds.),
, pp. 141-158
.
Wiesbaden
:
Springer Fachmedien Wiesbaden
, 2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inbook{hein:2025a,
title = {Social Virtual Reality für Inter- und Transkulturelles Lernen und Lehren im Englischunterricht},
author = {Hein, Rebecca and Steinbock, Jeanine and Eisenmann, Maria and Wienrich, Carolin and Latoschik, Marc Erich},
editor = {Füting-Lippert, Angelika and Eisenmann, Maria and Grafe, Silke and Siller, Hans-Stefan and Trefzger, Thomas},
booktitle = {Digitale Medien in Lehr-Lern-Konzepten der Lehrpersonenbildung in interdisziplinärer Perspektive: Ergebnisse des Forschungsprojekts Connected Teacher Education},
year = {2025},
pages = {141--158},
publisher = {Springer Fachmedien Wiesbaden},
address = {Wiesbaden},
url = {https://doi.org/10.1007/978-3-658-45088-5_9},
doi = {10.1007/978-3-658-45088-5_9}
}
Abstract: For the integration of Social Virtual Reality (SVR) into English-language teaching to foster inter- and transcultural competences, a comprehensive framework, empirical evidence, assessment methods, and considerations of scalability are lacking. This limits the effective, evidence-based use of SVR in education and potentially restricts its capacity to support the development of inter- and transcultural competence and to counter discrimination, stereotyping, and racist thinking in language learning. The CoTeach subproject makes an important contribution here by developing and evaluating a seminar for prospective teachers that informed participants about the potentials and challenges of SVR for building inter- and transcultural competence. In addition, the seminar concept enables the practice of key action competences in handling SVR, so that a transfer to later lesson planning can succeed. The interdisciplinary approach highlights the synergy between inter- and transcultural education and Human-Computer Interaction (HCI) and demonstrates the importance of innovative teaching concepts supported by digital tools.
Martin Weiß, Philipp Krop, Lukas Treml, Elias Neuser, Mario Botsch, Martin J. Herrmann, Marc Erich Latoschik, Grit Hein,
The Buffering of Autonomic Fear Responses Is Moderated by the Characteristics of a Virtual Character
, In
Computers in Human Behavior
, Vol.
168
, p. 108657
.
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{weiss2025buffering,
title = {The Buffering of Autonomic Fear Responses Is Moderated by the Characteristics of a Virtual Character},
author = {Weiß, Martin and Krop, Philipp and Treml, Lukas and Neuser, Elias and Botsch, Mario and Herrmann, Martin J. and Latoschik, Marc Erich and Hein, Grit},
journal = {Computers in Human Behavior},
year = {2025},
volume = {168},
pages = {108657},
url = {https://www.sciencedirect.com/science/article/pii/S0747563225001049},
doi = {10.1016/j.chb.2025.108657}
}
Abstract: The presence of a conspecific can mitigate autonomic responses to aversive stimuli, an effect known as social buffering. Nowadays, social interactions are often virtual, yet virtual social buffering effects remain poorly understood. This work presents five studies that systematically test the conditions required for virtual social buffering. We assessed participants’ emotion ratings and skin conductance responses when they were presented with neutral or fear-inducing sounds alone or in the presence of a virtual character with a varying extent of human-like features (virtual female or male person, wooden puppet, point cloud). The characters were presented using the same social framing, i.e., had the same social meaning. Our results show a significant reduction in SCR responses to fear-inducing sounds in the presence of a virtual character, but only if it is embodied as a woman or a wooden puppet. Clarifying the role of the social frame, a control study showed no social buffering effects if the wooden puppet was presented without the social frame. Our results show that the characteristics of a virtual character significantly moderate the social buffering of fear responses. Our findings shed light on the nature of virtual social buffering effects and are relevant for developing virtual applications for clinical and societal interventions.
Christian Merz, Niklas Krome, Carolin Wienrich, Stefan Kopp, Marc Erich Latoschik,
The Impact of AI-Based Real-Time Gesture Generation and Immersion on the Perception of Others and Interaction Quality in Social XR
, In
IEEE Transactions on Visualization and Computer Graphics
.
2025.
IEEE ISMAR Best Paper Award Honorable Mention 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{merz2025impact,
title = {The Impact of AI-Based Real-Time Gesture Generation and Immersion on the Perception of Others and Interaction Quality in Social XR},
author = {Merz, Christian and Krome, Niklas and Wienrich, Carolin and Kopp, Stefan and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2025},
note = {IEEE ISMAR Best Paper Award Honorable Mention 🏆},
doi = {10.1109/TVCG.2025.3616864}
}
Abstract: This study explores how people interact in dyadic social eXtended Reality (XR), focusing on two main factors: the animation type of a conversation partner’s avatar and how immersed the user feels in the virtual environment. Specifically, we investigate how 1) idle behavior, 2) AI-generated gestures, and 3) motion-captured movements from a confederate (a controlled partner in the study) influence the quality of conversation and how that partner is perceived. We examined these effects in both symmetric interactions (where both participants use VR headsets and controllers) and asymmetric interactions (where one participant uses a desktop setup). We developed a social XR platform that supports asymmetric device configurations to provide varying levels of immersion. The platform also supports a modular avatar animation system providing idle behavior, real-time AI-generated co-speech gestures, and full-body motion capture. Using a 2×3 mixed design with 39 participants, we measured users’ sense of spatial presence, their perception of the confederate, and the overall conversation quality. Our results show that users who were more immersed felt a stronger sense of presence and viewed their partner as more human-like and believable. Surprisingly, however, the type of avatar animation did not significantly affect conversation quality or how the partner was perceived. Participants often reported focusing more on what was said rather than how the avatar moved.
Andrea Zimmerer, Lydia Bartels, Marc Erich Latoschik,
The Impact of Performance-Specific Feedback from a Virtual Coach in a Virtual Reality Exercise Application
, In
2025 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 1031-1041
.
IEEE Computer Society
, 2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{zimmerer2025feedback,
title = {The Impact of Performance-Specific Feedback from a Virtual Coach in a Virtual Reality Exercise Application},
author = {Zimmerer, Andrea and Bartels, Lydia and Latoschik, Marc Erich},
booktitle = {2025 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2025},
pages = {1031-1041},
publisher = {IEEE Computer Society},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ismar-feedback-from-a-virtual-coach-in-vr-exercise.pdf},
doi = {10.1109/ISMAR67309.2025.00110}
}
Abstract: Virtual reality (VR) exercise applications are promising tools, e.g., for at-home training and rehabilitation. However, existing applications vary significantly in key design choices such as environments, embodiment, and virtual coaching, making it difficult to derive clear design guidelines. A prominent design choice is the use of embodied virtual coaches, which guide user interaction and provide feedback. In a user study with 76 participants, we investigated how different levels of performance specificity in feedback from an embodied virtual coach affect intermediate factors, such as VR experience, motivation, and coach perception. Participants performed lower-body movement exercises, i.e., Leg Raises and Knee Extensions, commonly used in knee rehabilitation. We found that highly performance-specific feedback led to higher scores compared to medium specificity for perceived realism, as well as the anthropomorphism and sympathy of the virtual coach, but did not affect motivation. Based on our findings, we propose the design suggestion to include precise, performance-specific details when creating feedback for a virtual coach. We observed a descriptive pattern of higher scores in the low specificity condition compared to the medium condition on most measures, which raises the possibility that less specific feedback may, in some cases, be perceived more positively than moderately specific feedback. These findings provide valuable insights into how design choices impact relevant intermediate factors that are crucial for maximizing both workout effectiveness and the quality of the virtual coaching experience.
Andreas Halbig, Marc Erich Latoschik,
The Interwoven Nature of Spatial Presence and Virtual Embodiment: A Comprehensive Perspective
, In
Frontiers in Virtual Reality
, Vol.
6
.
2025.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@article{halbig-interwoven,
title = {The Interwoven Nature of Spatial Presence and Virtual Embodiment: A Comprehensive Perspective},
author = {Halbig, Andreas and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2025},
volume = {6},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2025.1616662/full},
doi = {10.3389/frvir.2025.1616662}
}
Joanna Grause, Larissa Brübach, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
The Stability of Plausibility and Presence in Claustrophobic Virtual Reality Exposure Therapy
, In
Proceedings of the Mensch Und Computer 2025
, pp. 181–192
.
New York, NY, USA
:
Association for Computing Machinery
, 2025.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{noauthororeditor2025stability,
title = {The Stability of Plausibility and Presence in Claustrophobic Virtual Reality Exposure Therapy},
author = {Grause, Joanna and Brübach, Larissa and Westermeier, Franziska and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {Proceedings of the Mensch Und Computer 2025},
year = {2025},
pages = {181–192},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3743049.3743068},
doi = {10.1145/3743049.3743068}
}
Jonathan Tschanter, Christian Merz, Carolin Wienrich, Marc Erich Latoschik,
Towards Understanding Harassment in Social Virtual Reality: A Study Design on the Impact of Avatar Self-Similarity
, In
2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)
.
IEEE Computer Science
, 2025.
IDEATExR Best Paper 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{tschanter2025harassment,
title = {Towards Understanding Harassment in Social Virtual Reality: A Study Design on the Impact of Avatar Self-Similarity},
author = {Tschanter, Jonathan and Merz, Christian and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)},
year = {2025},
publisher = {IEEE Computer Science},
note = {IDEATExR Best Paper 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevrw-towards-understanding-harassment-in-social-virtual-reality.pdf}
}
Abstract: In social virtual reality (VR), harassment persists as a pervasive and critical issue. Prior work emphasizes its perceived realness and emotional impact. However, the influence of avatar design, particularly the role of self-similarity, remains underexplored. Self-similar avatars can enhance user identification and engagement, yet potentially intensify the psychological and physiological effects of harassment. Existing studies often rely on interviews or user-generated content, lacking systematic analysis and controlled comparisons. To address these gaps, we present a process for creating realistic VR harassment scenarios. We built a scenario based on existing literature and validated it with expert reviews and user feedback. We propose a 2 x 2 between-subjects design to systematically examine users' emotional and physiological states, their identification with avatars, and the effects of avatar self-similarity. The study design will deepen the understanding of harassment dynamics in VR. Additionally, it can provide actionable insights for designing safer, more inclusive virtual environments that promote user well-being and foster equitable communities.
Christian Merz, Lukas Schach, Marie Luisa Fiedler, Jean-Luc Lugrin, Carolin Wienrich, Marc Erich Latoschik,
Unobtrusive In-Situ Measurement of Behavior Change by Deep Metric Similarity Learning of Motion Patterns
.
2025.
[BibTeX]
[Download]
[BibSonomy]
@misc{merz2025unobtrusiveinsitumeasurementbehavior,
title = {Unobtrusive In-Situ Measurement of Behavior Change by Deep Metric Similarity Learning of Motion Patterns},
author = {Merz, Christian and Schach, Lukas and Fiedler, Marie Luisa and Lugrin, Jean-Luc and Wienrich, Carolin and Latoschik, Marc Erich},
year = {2025},
url = {https://arxiv.org/abs/2509.04174}
}
Kathrin Gemesi, Nina Döllinger, Natascha-Alexandra Weinberger, Erik Wolf, David Mal, Sebastian Keppler, Stephan Wenninger, Emily Bader, Carolin Wienrich, Claudia Luck-Sikorski, Marc Erich Latoschik, Johann Habakuk Israel, Mario Botsch, Christina Holzapfel,
Virtual body image exercises for people with obesity -- results on eating behavior and body perception of the ViTraS pilot study
, In
BMC Medical Informatics and Decision Making
, Vol.
25
(
1)
, p. 176
.
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{Gemesi2025,
title = {Virtual body image exercises for people with obesity -- results on eating behavior and body perception of the ViTraS pilot study},
author = {Gemesi, Kathrin and Döllinger, Nina and Weinberger, Natascha-Alexandra and Wolf, Erik and Mal, David and Keppler, Sebastian and Wenninger, Stephan and Bader, Emily and Wienrich, Carolin and Luck-Sikorski, Claudia and Latoschik, Marc Erich and Israel, Johann Habakuk and Botsch, Mario and Holzapfel, Christina},
journal = {BMC Medical Informatics and Decision Making},
year = {2025},
volume = {25},
number = {1},
pages = {176},
url = {https://doi.org/10.1186/s12911-025-02993-x},
doi = {10.1186/s12911-025-02993-x}
}
Abstract: A negative body image can have an impact on developing and maintaining obesity. Using virtual reality (VR) to conduct cognitive behavioral therapy (CBT) is an innovative approach to treat people with obesity. This multicenter non-randomized pilot study examined the feasibility and the effect on eating behavior and body perception of a newly developed VR system to conduct body image exercises.
Larissa Brübach, Deniz Celikhan, Lennard Rüffert, Franziska Westermeier, Marc Erich Latoschik, Carolin Wienrich,
When Fear Overshadows Perceived Plausibility: The Influence of Incongruencies on Acrophobia in VR
, In
Proceedings of the 32nd IEEE Virtual Reality conference (VR '25)
.
IEEE Computer Science
, 2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{brubach2025overshadows,
title = {When Fear Overshadows Perceived Plausibility: The Influence of Incongruencies on Acrophobia in VR},
author = {Brübach, Larissa and Celikhan, Deniz and Rüffert, Lennard and Westermeier, Franziska and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {Proceedings of the 32nd IEEE Virtual Reality conference (VR '25)},
year = {2025},
publisher = {IEEE Computer Science},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-bruebach-height-and-plausibility-preprint.pdf},
doi = {10.1109/VR59515.2025.00089}
}
Abstract: Virtual Reality Exposure Therapy (VRET) has become an effective, customizable, and affordable treatment for various psychological and physiological disorders. Specifically, it has been used for decades to treat specific anxiety disorders, such as acrophobia or arachnophobia. However, to ensure a positive outcome for patients, we must understand and control the effects potentially caused by the technology and medium of Virtual Reality (VR) itself. This article specifically investigates the impact of the Plausibility illusion (Psi), as one of the two theorized presence components, on the fear of heights. In two experiments, 30 participants each experienced two different heights with congruent and incongruent object behaviors in a 2 x 2 within-subject design. Results show that the strength of the congruence manipulation plays a significant role. Only when incongruencies are strong enough will they be recognized by users, specifically in high fear conditions, as triggered by exposure to increased heights. If incongruencies are too subtle, they seem to be overshadowed by the stronger fear reactions. Our evidence contributes to recent theories of VR effects and emphasizes the importance of understanding and controlling factors potentially assumed to be incidental, specifically during VRET designs. Incongruencies should be controlled so that they do not have an unwanted influence on the patient's fear response.
2024
Olaf Clausen, Martin Mišiak, Arnulph Fuhrmann, Ricardo Marroquim, Marc Erich Latoschik,
A Practical Real-Time Model for Diffraction on Rough Surfaces
, In
Journal of Computer Graphics Techniques
, Vol.
13
(
1)
, pp. 1-27
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{clausen2024practical,
title = {A Practical Real-Time Model for Diffraction on Rough Surfaces},
author = {Clausen, Olaf and Mišiak, Martin and Fuhrmann, Arnulph and Marroquim, Ricardo and Latoschik, Marc Erich},
journal = {Journal of Computer Graphics Techniques},
year = {2024},
volume = {13},
number = {1},
pages = {1-27},
url = {https://jcgt.org/published/0013/01/01/}
}
Abstract: Wave optics phenomena have a significant impact on the visual appearance of rough conductive surfaces even when illuminated with partially coherent light. Recent models address these phenomena, but none is real-time capable due to the complexity of the underlying physics equations. We provide a practical real-time model, building on the measurements and model by Clausen et al. 2023, that approximates diffraction-induced wavelength shifts and speckle patterns with only a small computational overhead compared to the popular Cook-Torrance GGX model. Our model is suitable for Virtual Reality applications, as it contains domain-specific improvements to address the issues of aliasing and highlight disparity.
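The baseline the abstract compares against is the Cook-Torrance BRDF with the GGX microfacet distribution. As a point of reference, the GGX normal distribution term D(h) can be sketched as follows; the remapping of perceptual roughness to alpha via squaring is a common convention and an assumption here, and this is not the diffraction model of the paper itself.

```python
import math

def ggx_ndf(n_dot_h, roughness):
    """GGX/Trowbridge-Reitz normal distribution function D(h), the
    microfacet term of the Cook-Torrance GGX baseline mentioned in the
    abstract. `roughness` is perceptual roughness; alpha = roughness^2
    is a common (assumed) remapping.
    """
    alpha = roughness * roughness
    a2 = alpha * alpha
    # D(h) = a^2 / (pi * ((n.h)^2 (a^2 - 1) + 1)^2)
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

Smoother surfaces (small roughness) concentrate the distribution into a sharp peak around the half-vector, which is where the diffraction-induced wavelength shifts and speckle approximated by the paper's model would be layered on top.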
David Mal, Nina Döllinger, Erik Wolf, Stephan Wenninger, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik,
Am I the odd one? Exploring (in)congruencies in the realism of avatars and virtual others in virtual reality
, In
Frontiers in Virtual Reality
, Vol.
5
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{mal2024oddone,
title = {Am I the odd one? Exploring (in)congruencies in the realism of avatars and virtual others in virtual reality},
author = {Mal, David and Döllinger, Nina and Wolf, Erik and Wenninger, Stephan and Botsch, Mario and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2024},
volume = {5},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-frontiers-vhp-group-final.pdf},
doi = {10.3389/frvir.2024.1417066}
}
Abstract: Virtual humans play a pivotal role in social virtual environments, shaping users’ VR experiences. The diversity in available options and users’ individual preferences can result in a heterogeneous mix of appearances among a group of virtual humans. The resulting variety in higher-order anthropomorphic and realistic cues introduces multiple (in)congruencies, eventually impacting the plausibility of the experience. However, related work investigating the effects of being co-located with multiple virtual humans of different appearances remains limited. In this work, we consider the impact of (in)congruencies in the realism of a group of virtual humans, including co-located others (agents) and one’s self-representation (self-avatar), on users’ individual VR experiences. In a 2 × 3 mixed design, participants embodied either (1) a personalized realistic or (2) a customized stylized self-avatar across three consecutive VR exposures in which they were accompanied by a group of virtual others being either (1) all realistic, (2) all stylized, or (3) mixed between stylized and realistic. Our results indicate groups of virtual others of higher realism, i.e., potentially more congruent with participants’ real-world experiences and expectations, were considered more human-like, increasing the feeling of co-presence and the impression of interaction possibilities. (In)congruencies concerning the homogeneity of the group did not cause considerable effects. Furthermore, our results indicate that a self-avatar’s congruence with the participant’s real-world experiences concerning their own physical body yielded notable benefits for virtual body ownership and self-identification for realistic personalized avatars. Notably, the incongruence between a stylized self-avatar and a group of realistic virtual others resulted in diminished ratings of self-location and self-identification. 
This suggests that higher-order (in)congruent visual cues that are not within the ego-central referential frame of one’s (virtual) body can have an (adverse) effect on the relationship between one’s self and body. We conclude on the implications of our findings and discuss our results within current theories of VR experiences, considering (in)congruent visual cues and their impact on the perception of virtual others, self-representation, and spatial presence.
Samantha Monty, Florian Kern, Marc Erich Latoschik,
Analysis of Immersive Mid-Air Sketching Behavior, Sketch Quality, and User Experience in Design Ideation Tasks
, In
23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
.
IEEE Computer Society
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{monty2024,
title = {Analysis of Immersive Mid-Air Sketching Behavior, Sketch Quality, and User Experience in Design Ideation Tasks},
author = {Monty, Samantha and Kern, Florian and Latoschik, Marc Erich},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2024},
publisher = {IEEE Computer Society},
url = {https://ieeexplore.ieee.org/document/10765456},
doi = {10.1109/ISMAR62088.2024.00041}
}
Abstract: Immersive 3D sketching systems empower users with tools to create sketches directly in the air around themselves, in all three dimensions, using only simple hand gestures. These sketching systems have the potential to greatly extend the interactive capabilities of immersive learning environments. The perceptual challenges of Virtual Reality (VR), however, combined with the ergonomic and cognitive challenges of creating mid-air 3D sketches, reduce the effectiveness of immersive sketching used for problem-solving, reflection, and to capture fleeting ideas. We contribute to the understanding of the potential challenges of mid-air sketching systems in educational settings, where expression is valued higher than accuracy, and sketches are used to support problem-solving and to explain abstract concepts. We conducted an empirical study with 36 participants with different spatial abilities to investigate if the way that people sketch in mid-air is dependent on the goal of the sketch. We compare the technique, quality, efficiency, and experience of participants as they create 3D mid-air sketches in three different tasks. We examine how users approach mid-air sketching when the sketches they create serve to convey meaning and when sketches are merely reproductions of geometric models created by someone else. We found that in tasks aimed at expressing personal design ideas, between starting and ending strokes, participants moved their heads more and their controllers at higher velocities, and created strokes in faster times than in tasks aimed at recreating 3D geometric figures. They reported feeling less time pressure to complete sketches but redacted a larger percentage of strokes. These findings serve to inform the design of creative virtual environments that support reasoning and reflection through mid-air sketching. With this work, we aim to strengthen the power of immersive systems that support mid-air 3D sketching by exploiting natural user behavior to assist users to more quickly and faithfully convey their meaning in sketches.
Kristoffer Waldow, Jonas Scholz, Martin Misiak, Arnulph Fuhrmann, Daniel Roth, Marc Erich Latoschik,
Anti-aliasing Techniques in Virtual Reality: A User Study with Perceptual Pairwise Comparison Ranking Scheme
, In
GI VR/AR Workshop
, pp. 10-18420
.
2024.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{waldow2024anti,
title = {Anti-aliasing Techniques in Virtual Reality: A User Study with Perceptual Pairwise Comparison Ranking Scheme},
author = {Waldow, Kristoffer and Scholz, Jonas and Misiak, Martin and Fuhrmann, Arnulph and Roth, Daniel and Latoschik, Marc Erich},
booktitle = {GI VR/AR Workshop},
year = {2024},
pages = {10--18420},
}
Franziska Westermeier, Larissa Brübach, Carolin Wienrich, Marc Erich Latoschik,
Assessing Depth Perception in VR and Video See-Through AR: A Comparison on Distance Judgment, Performance, and Preference
, In
IEEE Transactions on Visualization and Computer Graphics
, Vol.
30
(
5)
, pp. 2140-2150
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{westermeier2024assessing,
title = {Assessing Depth Perception in VR and Video See-Through AR: A Comparison on Distance Judgment, Performance, and Preference},
author = {Westermeier, Franziska and Brübach, Larissa and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2024},
volume = {30},
number = {5},
pages = {2140-2150},
url = {https://ieeexplore.ieee.org/document/10458408},
doi = {10.1109/TVCG.2024.3372061}
}
Abstract: Spatial User Interfaces along the Reality-Virtuality continuum heavily depend on accurate depth perception. However, current display technologies still exhibit shortcomings in the simulation of accurate depth cues, and these shortcomings also vary between Virtual and Augmented Reality (VR and AR; eXtended Reality, XR, for short). This article compares depth perception between VR and Video See-Through (VST) AR. We developed a digital twin of an existing office room in which users had to perform five depth-dependent tasks in VR and VST AR. Thirty-two participants took part in a user study using a 1×4 within-subjects design. Our results reveal higher misjudgment rates in VST AR due to conflicting depth cues between virtual and physical content. Increased head movements observed in participants were interpreted as a compensatory response to these conflicting cues. Furthermore, a longer task completion time in the VST AR condition indicates lower task performance in VST AR. Interestingly, although participants rated the VR condition as easier, and despite the increased misjudgments and lower performance with the VST AR display, a majority still expressed a preference for the VST AR experience. We discuss and explain these findings with the high visual dominance and referential power of the physical content in the VST AR condition, leading to higher spatial presence and plausibility.
Murat Yalcin, Andreas Halbig, Martin Fischbach, Marc Erich Latoschik,
Automatic Cybersickness Detection by Deep Learning of Augmented Physiological Data from Off-the-Shelf Consumer-Grade Sensors
, In
Frontiers in Virtual Reality
, Vol.
5
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10.3389/frvir.2024.1364207,
title = {Automatic Cybersickness Detection by Deep Learning of Augmented Physiological Data from Off-the-Shelf Consumer-Grade Sensors},
author = {Yalcin, Murat and Halbig, Andreas and Fischbach, Martin and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2024},
volume = {5},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2024.1364207},
doi = {10.3389/frvir.2024.1364207}
}
Abstract: Cybersickness is still a prominent risk factor potentially affecting the usability of virtual reality applications. Automated real-time detection of cybersickness promises to support a better general understanding of the phenomenon and to avoid and counteract its occurrence. It could be used to facilitate application optimization, that is, to systematically link potential causes (technical development and conceptual design decisions) to cybersickness in closed-loop user-centered development cycles. In addition, it could be used to monitor, warn, and hence safeguard users against any onset of cybersickness during a virtual reality exposure, especially in healthcare applications. This article presents a novel real-time-capable cybersickness detection method by deep learning of augmented physiological data. In contrast to related preliminary work, we are exploring a unique combination of mid-immersion ground truth elicitation, an unobtrusive wireless setup, and moderate training performance requirements. We developed a proof-of-concept prototype to compare (combinations of) convolutional neural networks, long short-term memory, and support vector machines with respect to detection performance. We demonstrate that the use of a conditional generative adversarial network-based data augmentation technique increases detection performance significantly and showcase the feasibility of real-time cybersickness detection in a genuine application example. Finally, a comprehensive performance analysis demonstrates that a four-layered bidirectional long short-term memory network with the developed data augmentation delivers superior performance (91.1% F1-score) for real-time cybersickness detection. To encourage replicability and reuse in future cybersickness studies, we publicly released the code and the dataset.
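As background on the headline number, the 91.1% F1-score reported above is the harmonic mean of precision and recall for the detector. A minimal stdlib Python sketch with hypothetical detection windows (illustrative only, not the paper's evaluation code):

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical example: 4 cybersick windows, 3 detected, plus 1 false alarm.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
print(round(f1_score(y_true, y_pred), 3))  # 0.75
```

Unlike plain accuracy, F1 is insensitive to the (typically large) number of true-negative windows, which is why it is the common choice for imbalanced detection tasks like this one.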
Sophia Maier, Sebastian Oberdörfer, Marc Erich Latoschik,
Ballroom Dance Training with Motion Capture and Virtual Reality
, In
Proceedings of Mensch Und Computer 2024 (MuC '24)
, pp. 617-621
.
New York, NY, USA
:
Association for Computing Machinery
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{maier2024ballroom,
title = {Ballroom Dance Training with Motion Capture and Virtual Reality},
author = {Maier, Sophia and Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {Proceedings of Mensch Und Computer 2024 (MuC '24)},
year = {2024},
pages = {617-621},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2024-muc-ballroom-dance-training-with-motion-capture-and-virtual-reality-preprint.pdf},
doi = {10.1145/3670653.3677499}
}
Abstract: This paper investigates the integration of motion capture and virtual reality (VR) technologies in competitive ballroom dancing (slow waltz, tango, slow foxtrot, Viennese waltz, quickstep), aiming to analyze posture correctness and provide feedback to dancers for posture enhancement. Through qualitative interviews, the study identifies specific requirements and gathers insights into potentially helpful feedback mechanisms. Using Unity and motion capture technology, we implemented a prototype system featuring real-time visual cues for posture correction and a replay function for analysis. A validation study with competitive ballroom dancers reveals generally positive feedback on the system’s usefulness, though challenges like cable obstruction and poor usability of the user interface are noted. Insights from participants inform future refinements, emphasizing the need for precise feedback, cable-free movement, and user-friendly interfaces. While the program is promising for ballroom dance training, further research is needed to evaluate the system’s overall efficacy.
Sophia C. Steinhaeusser, Elisabeth Ganal, Murat Yalcin, Marc Erich Latoschik, Birgit Lugrin,
Binded to the Lights – Storytelling with a Physically Embodied and a Virtual Robot using Emotionally Adapted Lights
, In
2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN)
, pp. 2117-2124
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10731419,
title = {Binded to the Lights – Storytelling with a Physically Embodied and a Virtual Robot using Emotionally Adapted Lights},
author = {Steinhaeusser, Sophia C. and Ganal, Elisabeth and Yalcin, Murat and Latoschik, Marc Erich and Lugrin, Birgit},
booktitle = {2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN)},
year = {2024},
pages = {2117-2124},
doi = {10.1109/RO-MAN60168.2024.10731419}
}
Abstract: Virtual environments (VEs) can be designed to evoke specific emotions, for example by using colored light; this applies not only to games but also to virtual storytelling with a single storyteller. Social robots are perfectly suited as storytellers due to their multimodality. However, there is no research yet on the transferability of robotic storytelling to virtual reality (VR). In addition, the transfer of concepts from VE design, such as adaptive room illumination, to robotic storytelling has not yet been tested. Thus, we conducted a study comparing the same robotic storytelling performed by a physically embodied robot and by its counterpart in VR to investigate the transferability of robotic storytelling to VR. As a second factor, we manipulated the room light following design guidelines for VEs or kept it constant. Results show that a virtual robotic storyteller is not perceived as worse than a physically embodied storyteller, suggesting the applicability of virtual static robotic storytellers. Regarding emotion-driven lighting, no significant effect of colored lights on self-reported emotions was found, but adding colored light increased the social presence of the robot and its perceived competence in both VR and reality. As our study was limited by a static robotic storyteller that did not use bodily expressiveness, future work based on our results is needed to investigate the interaction between well-researched robot modalities and the rather new modality of colored light.
Andreas Halbig, Marc Erich Latoschik,
Common Cues? Toward the Relationship of Spatial Presence and the Sense of Embodiment
, In
23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 1117-1126
.
Los Alamitos, CA, USA
:
IEEE Computer Society
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{halbig2024common,
title = {Common Cues? Toward the Relationship of Spatial Presence and the Sense of Embodiment},
author = {Halbig, Andreas and Latoschik, Marc Erich},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2024},
pages = {1117-1126},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ISMAR-halbig-common-cues.pdf},
doi = {10.1109/ISMAR62088.2024.00128}
}
Abstract: The sense of presence and the sense of embodiment are two fundamental qualia, pivotal to many virtual reality experiences. Empirical research indicates a notable interdependence between these two qualia, where manipulations designed to affect one often exhibit a concurrent influence on the other. Existing theories on the development of qualia in virtual reality make no or only insufficient statements on this deep interdependence. In this work, we present a novel theoretical perspective on this connection. Based on existing theories, we argue that all the fundamental cues influencing one quale have the potential to impact the other one too. We present three studies (n = 42, n = 42, n = 32) that generally support this novel perspective. Among other things, they show that traditional spatial presence cues such as head-tracking and passive depth cues (stereoscopy, linear perspective, etc.) can potentially also serve as embodiment cues. Conversely, they show that typical embodiment cues such as the visuotactile and visuoproprioceptive synchrony of a virtual hand are also spatial presence cues. The cues only differ in terms of how strongly they influence the respective quale. This novel perspective not only enhances our understanding of fundamental mechanics of virtual reality but it can also guide the development of more effective measurement instruments.
Murat Yalcin, Marc Erich Latoschik,
DeepFear: Game Usage within Virtual Reality to Provoke Physiological Responses of Fear
, In
Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems
, pp. 1–8
.
New York, NY, USA
:
Association for Computing Machinery
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{Yalcin2024,
title = {DeepFear: Game Usage within Virtual Reality to Provoke Physiological Responses of Fear},
author = {Yalcin, Murat and Latoschik, Marc Erich},
booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
year = {2024},
pages = {1–8},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3613905.3650877},
doi = {10.1145/3613905.3650877}
}
Abstract: The investigation and classification of the physiological signals involved in fear perception is complicated by the difficulties in reliably eliciting and measuring the complex construct of fear. Virtual Reality (VR) games, in particular, can reliably elicit such physiological responses, which can then be used to develop treatments in the healthcare domain. In this study, we carried out an exploratory analysis of physiological data and of the feasibility of wearable sensory devices for detecting responses of fear. We contribute 1) the use of an off-the-shelf commercial game (Half-Life: Alyx) to provoke the emotion of fear, 2) a performance analysis of different deep learning models such as Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Transformers, and 3) an investigation, through comprehensive data analysis, of the most responsive physiological signal and the best sensory device for multi-level fear classification. Accuracy metrics, F1-scores, and confusion matrices showed that ECG and ACC are the two most significant signals for fear recognition.
Christian Merz, Carolin Wienrich, Marc Erich Latoschik,
Does Voice Matter? The Effect of Verbal Communication and Asymmetry on the Experience of Collaborative Social XR
, In
23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 1127-1136
.
IEEE Computer Society
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{merz2024voice,
title = {Does Voice Matter? The Effect of Verbal Communication and Asymmetry on the Experience of Collaborative Social XR},
author = {Merz, Christian and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2024},
pages = {1127-1136},
publisher = {IEEE Computer Society},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-does-voice-matter-preprint.pdf},
doi = {10.1109/ISMAR62088.2024.00129}
}
Abstract: This work evaluates how the asymmetry of device configurations and verbal communication influence the user experience of social eXtended Reality (XR) for self-perception, other-perception, and task perception. We developed an application that enables social collaboration between two users with varying device configurations. We compare the conditions of one symmetric interaction, where both device configurations are Head-Mounted Displays (HMDs) with tracked controllers, with the conditions of one asymmetric interaction, where one device configuration is an HMD with tracked controllers and the other device configuration is a desktop screen with a mouse. In our study, 52 participants collaborated in a dyadic interaction on a sorting task while talking to each other. We compare our results to previous work that evaluated the same scenario without verbal communication. In line with prior research, self-perception is influenced by the immersion of the used device configuration and verbal communication. While co-presence was not affected by the device configuration or the inclusion of verbal communication, social presence was only higher for HMD configurations that allowed verbal communication. Task perception was hardly affected by the device configuration or verbal communication. We conclude that the device in social XR is important for self-perception with or without verbal communication. However, the results indicate that the device configuration only affects the qualities of social interaction in collaborative scenarios when verbal communication is enabled. To sum up, asymmetric collaboration maintains the high quality of self-perception and interaction for highly immersed users while still enabling the participation of less immersed users.
Vivek Nair, Mark Roman Miller, Rui Wang, Brandon Huang, Christian Rack, Marc Erich Latoschik, James O'Brien,
Effect of Data Degradation on Motion Re-Identification
, In
2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)
.
2024.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{nair2024effect,
title = {Effect of Data Degradation on Motion Re-Identification},
author = {Nair, Vivek and Miller, Mark Roman and Wang, Rui and Huang, Brandon and Rack, Christian and Latoschik, Marc Erich and O'Brien, James},
booktitle = {2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)},
year = {2024},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-06-nair-obfuscation.pdf},
doi = {10.1109/WoWMoM60985.2024.00026}
}
Mark R Miller, Vivek C Nair, Eugy Han, Cyan DeVeaux, Christian Rack, Rui Wang, Brandon Huang, Marc Erich Latoschik, James F O'Brien, Jeremy N Bailenson,
Effect of Duration and Delay on the Identifiability of VR Motion
, In
2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)
.
2024.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{miller2024effect,
title = {Effect of Duration and Delay on the Identifiability of VR Motion},
author = {Miller, Mark R and Nair, Vivek C and Han, Eugy and DeVeaux, Cyan and Rack, Christian and Wang, Rui and Huang, Brandon and Latoschik, Marc Erich and O'Brien, James F and Bailenson, Jeremy N},
booktitle = {2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)},
year = {2024},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-06-Miller-effect-of-duration-and-delay.pdf},
doi = {10.1109/WoWMoM60985.2024.00023}
}
Pascal Martinez Pankotsch, Sebastian Oberdörfer, Marc Erich Latoschik,
Effects of Nonverbal Communication of Virtual Agents on Social Pressure and Encouragement in VR
, In
Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24)
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{martinezpankotsch2024effects,
title = {Effects of Nonverbal Communication of Virtual Agents on Social Pressure and Encouragement in VR},
author = {Martinez Pankotsch, Pascal and Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24)},
year = {2024},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-agent-encouragement-peer-pressure-preprint.pdf}
}
Abstract: Our study investigated how virtual agents impact users in challenging VR environments, exploring if nonverbal animations affect social pressure, positive encouragement, and trust in 30 female participants. Despite showing signs of pressure and support during the experimental trials, we could not find significant differences in post-exposure measurements of social pressure and encouragement, interpersonal trust, and well-being. While inconclusive, the findings suggest potential, indicating the need for further research with improved animations and a larger sample size for validation.
Nina Döllinger, Jessica Topel, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik, Jean-Luc Lugrin,
Exploring Agent-User Personality Similarity and Dissimilarity for Virtual Reality Psychotherapy
, In
2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{dollinger2024exploring,
title = {Exploring Agent-User Personality Similarity and Dissimilarity for Virtual Reality Psychotherapy},
author = {Döllinger, Nina and Topel, Jessica and Botsch, Mario and Wienrich, Carolin and Latoschik, Marc Erich and Lugrin, Jean-Luc},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2024},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-pandas-personality.pdf}
}
Abstract: Imaginary self-encounters are a common approach in psychotherapy. Recent virtual reality advancements enable innovative approaches to enhanced self-encounters using photorealistic personalized doppelgangers (DGs). Yet, next to appearance, similarity in body language could be a great driver of self-identification with a DG or a generic agent. One cost-efficient and time-saving approach could be personality-enhanced animations. We present a pilot study evaluating the effects of personality-enhanced body language in DGs and generic agents. Eleven participants evaluated a photorealistic DG and a generic agent, each animated in a seated position to simulate four personality types: low and high extraversion, and low and high emotional stability. Participants rated the agents' personalities and their self-identification with them. We found an overall positive relationship between a calculated personality similarity score, self-attribution, and perceived behavior similarity. Perceived appearance similarity was affected by personality similarity only in generic agents, indicating the potential of body language to provoke a feeling of similarity even in dissimilar-appearing agents.
David Mal, Erik Wolf, Nina Döllinger, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik,
From 2D-Screens to VR: Exploring the Effect of Immersion on the Plausibility of Virtual Humans
, In
CHI 24 Conference on Human Factors in Computing Systems Extended Abstracts
, pp. 1-8
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{mal2024vhpvr,
title = {From 2D-Screens to VR: Exploring the Effect of Immersion on the Plausibility of Virtual Humans},
author = {Mal, David and Wolf, Erik and Döllinger, Nina and Botsch, Mario and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {CHI 24 Conference on Human Factors in Computing Systems Extended Abstracts},
year = {2024},
pages = {1-8},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-chi-vhp-in-vr-preprint.pdf},
doi = {10.1145/3613905.3650773}
}
Abstract: Virtual humans significantly contribute to users' plausible XR experiences. However, not only the congruent rendering of the virtual human but also the intermediary display technology may have a significant impact on virtual humans' plausibility. In a low-immersive desktop-based and a high-immersive VR condition, participants rated realistic and abstract animated virtual humans regarding plausibility, affective appraisal, and social judgments. First, our results confirmed the factor structure of a preliminary virtual human plausibility questionnaire in VR. Further, the appearance and behavior of realistic virtual humans were overall perceived as more plausible compared to abstract virtual humans, an effect that increased with high immersion. Moreover, only for high immersion were realistic virtual humans rated as more trustworthy and sympathetic than abstract virtual humans. Interestingly, we observed a potential uncanny valley effect for low but not for high immersion. We discuss the impact of a natural perception of anthropomorphic and realistic cues in VR and highlight the potential of immersive technology to elicit distinct effects in virtual humans.
Marie Luisa Fiedler, Erik Wolf, Nina Döllinger, David Mal, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
From Avatars to Agents: Self-Related Cues through Embodiment and Personalization Affect Body Perception in Virtual Reality
, In
IEEE Transactions on Visualization and Computer Graphics
, pp. 1-11
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{fiedler2024selfcues,
title = {From Avatars to Agents: Self-Related Cues through Embodiment and Personalization Affect Body Perception in Virtual Reality},
author = {Fiedler, Marie Luisa and Wolf, Erik and Döllinger, Nina and Mal, David and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2024},
pages = {1-11},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-tvcg-self-identification-body-weight-perception-preprint-reduced.pdf},
doi = {10.1109/TVCG.2024.3456211}
}
Abstract: Our work investigates the influence of self-related cues in the design of virtual humans on body perception in virtual reality. In a 2x2 mixed design, 64 participants faced photorealistic virtual humans either as a motion-synchronized embodied avatar or as an autonomous moving agent, appearing subsequently with a personalized and generic texture. Our results unveil that self-related cues through embodiment and personalization yield an individual and complemented increase in participants' sense of embodiment and self-identification towards the virtual human. Different body weight modification and estimation tasks further showed an impact of both factors on participants' body weight perception. Additional analyses revealed that the participant's body mass index predicted body weight estimations in all conditions and that participants' self-esteem and body shape concerns correlated with different body weight perception results. Hence, we have demonstrated the occurrence of double standards through induced self-related cues in virtual human perception, especially through embodiment.
Carolin Wienrich, Marc Erich Latoschik, David Obremski,
Gender Differences and Social Design in Human-AI Collaboration: Insights from Virtual Cobot Interactions Under Varying Task Loads
, In
Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems
.
New York, NY, USA
:
Association for Computing Machinery
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10.1145/3613905.3650827,
title = {Gender Differences and Social Design in Human-AI Collaboration: Insights from Virtual Cobot Interactions Under Varying Task Loads},
author = {Wienrich, Carolin and Latoschik, Marc Erich and Obremski, David},
booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
year = {2024},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3613905.3650827},
doi = {10.1145/3613905.3650827}
}
Abstract: This work explores the effects of users’ gender and social design features of AI under different task load conditions on human-like attributions, social impact, work performance and perceived workload, user experience, and various other measures in Human-AI Interaction (HAII). Users had to execute sorting and dispatch tasks in collaboration with a virtual cobot. The degree of social gestalt of the cobot was varied by the ability to make small talk (i.e., talkative vs. non-talkative cobot), and the task load was increased by adding a secondary task (i.e., high vs. low task load condition). Overall, the talkative cobot led to a more positive perception of the cobot and increased social qualities like sense of meaning and team membership compared to the non-talkative cobot. The gender effect was particularly interesting: the talkative cobot had a buffering effect for women and a distraction conflict effect for men in high task load conditions. When interacting with the talkative cobot, women found the high task load condition less stressful. In contrast, the talkative cobot was distracting for men in the high task load condition. Our results highlight that social design choices and interindividual differences influence a successful collaboration between humans and AI. The work also shows the added value of systematic XR simulations for the investigation and design of human-centered HAIIs (eXtended AI approach).
Florian Kern, Jonathan Tschanter, Marc Erich Latoschik,
Handwriting for Text Input and the Impact of XR Displays, Surface Alignments, and Sentence Complexities
, In
IEEE Transactions on Visualization and Computer Graphics
, pp. 1-11
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10460576,
title = {Handwriting for Text Input and the Impact of XR Displays, Surface Alignments, and Sentence Complexities},
author = {Kern, Florian and Tschanter, Jonathan and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2024},
pages = {1-11},
url = {https://ieeexplore.ieee.org/document/10460576},
doi = {10.1109/TVCG.2024.3372124}
}
Abstract: Text input is desirable across various eXtended Reality (XR) use cases and is particularly crucial for knowledge and office work. This article compares handwriting text input between Virtual Reality (VR) and Video See-Through Augmented Reality (VST AR), facilitated by physically aligned and mid-air surfaces when writing simple and complex sentences. In a 2x2x2 experimental design, 72 participants performed two ten-minute handwriting sessions, each including ten simple and ten complex sentences representing text input in real-world scenarios. Our developed handwriting application supports different XR displays, surface alignments, and handwriting recognition based on digital ink. We evaluated usability, user experience, task load, text input performance, and handwriting style. Our results indicate high usability with a successful transfer of handwriting skills to the virtual domain. XR displays and surface alignments did not impact text input speed and error rate. However, sentence complexities did, with participants achieving higher input speeds and fewer errors for simple sentences (17.85 WPM, 0.51% MSD ER) than complex sentences (15.07 WPM, 1.74% MSD ER). Handwriting on physically aligned surfaces showed higher learnability and lower physical demand, making them more suitable for prolonged handwriting sessions. Handwriting on mid-air surfaces yielded higher novelty and stimulation ratings, which might diminish with more experience. Surface alignments and sentence complexities significantly affected handwriting style, leading to enlarged and more connected cursive writing both in mid-air and for simple sentences. The study also demonstrated the benefits of using XR controllers in a pen-like posture to mimic styluses and pressure-sensitive tips on physical surfaces for input detection. We additionally provide a phrase set of simple and complex sentences as a basis for future text input studies, which can be expanded and adapted.
Vivek Nair, Christian Rack, Wenbo Guo, Rui Wang, Shuixian Li, Brandon Huang, Atticus Cull, James F. O'Brien, Marc Latoschik, Louis Rosenberg, Dawn Song,
Inferring Private Personal Attributes of Virtual Reality Users from Ecologically Valid Head and Hand Motion Data
, In
2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
, pp. 477-484
.
2024.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10536245,
title = {Inferring Private Personal Attributes of Virtual Reality Users from Ecologically Valid Head and Hand Motion Data},
author = {Nair, Vivek and Rack, Christian and Guo, Wenbo and Wang, Rui and Li, Shuixian and Huang, Brandon and Cull, Atticus and O'Brien, James F. and Latoschik, Marc and Rosenberg, Louis and Song, Dawn},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2024},
pages = {477-484},
url = {},
doi = {10.1109/VRW62533.2024.00094}
}
Sebastian Oberdörfer, Sandra Birnstiel, Marc Erich Latoschik,
Influence of Virtual Shoe Formality on Gait and Cognitive Performance in a VR Walking Task
, In
Proceedings of the 31st IEEE Virtual Reality conference (VR '24)
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{oberdorfer2024influence,
title = {Influence of Virtual Shoe Formality on Gait and Cognitive Performance in a VR Walking Task},
author = {Oberdörfer, Sebastian and Birnstiel, Sandra and Latoschik, Marc Erich},
booktitle = {Proceedings of the 31st IEEE Virtual Reality conference (VR '24)},
year = {2024},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-stroop-shoes-preprint.pdf}
}
Abstract: Depending on their formality, clothes not only change one's appearance but can also influence behavior and cognitive processes. Shoes are a special aspect of an outfit. Besides coming in various degrees of formality, their structure can affect human gait. Avatars used to embody users in immersive Virtual Reality (VR) can wear any kind of clothing. According to the Proteus Effect, the appearance of a user's avatar can influence their behavior: users change their behavior in accordance with the expected behavior of the avatar. In our study, we embodied 39 participants with a generic avatar of the user's gender wearing three different pairs of shoes as a within-subjects condition. The shoes differed in degree of formality. We measured gait during a 2-minute walking task, during which participants wore the same real shoes, and assessed selective attention using the Stroop task. Our results show significant differences in gait between the tested virtual shoe pairs. We found small effects between the three shoe conditions with respect to selective attention. However, we found no significant differences with respect to correct items and response time in the Stroop task. Thus, our results indicate that virtual shoes are accepted by users and, although not eliciting any physical constraints, lead to changes in gait. This suggests that users not only adjust personal behavior according to the Proteus Effect but are also affected by virtual biomechanical constraints. Our results also suggest a potential influence of virtual clothing on cognitive performance.
Kristoffer Waldow, Lukas Decker, Martin Mišiak, Arnulph Fuhrmann, Daniel Roth, Marc Erich Latoschik,
Investigating Incoherent Depth Perception Features in Virtual Reality using Stereoscopic Impostor-Based Rendering
, In
Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24)
.
IEEE
, 2024.
Best poster award 🏆.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{waldow2024investigating,
title = {Investigating Incoherent Depth Perception Features in Virtual Reality using Stereoscopic Impostor-Based Rendering},
author = {Waldow, Kristoffer and Decker, Lukas and Mišiak, Martin and Fuhrmann, Arnulph and Roth, Daniel and Latoschik, Marc Erich},
booktitle = {Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24)},
year = {2024},
publisher = {IEEE},
note = {Best poster award 🏆.},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-incoherent-depth-cues.pdf}
}
Abstract: Depth perception is essential for our daily experiences, aiding in orientation and interaction with our surroundings. Virtual Reality allows us to decouple such depth cues, mainly represented through binocular disparity and motion parallax. With fully mesh-based rendering methods, these cues are not problematic, as they originate from the object’s underlying geometry. However, manipulating motion parallax, as in stereoscopic impostor-based rendering, raises multiple perceptual questions. We therefore conducted a user experiment to investigate how varying object sizes affect such visual errors and perceived 3-dimensionality, revealing a significant negative correlation and prompting new assumptions about visual quality.
Larissa Brübach, Mona Röhm, Franziska Westermeier, Marc Erich Latoschik, Carolin Wienrich,
Manipulating Immersion: The Impact of Perceptual Incongruence on Perceived Plausibility in VR
, In
23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
.
IEEE Computer Society
, 2024.
IEEE ISMAR Best Paper Nominee 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{brubach2024manipulating,
title = {Manipulating Immersion: The Impact of Perceptual Incongruence on Perceived Plausibility in VR},
author = {Brübach, Larissa and Röhm, Mona and Westermeier, Franziska and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2024},
publisher = {IEEE Computer Society},
note = {IEEE ISMAR Best Paper Nominee 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-manipulating-immersion.pdf},
doi = {10.1109/ISMAR62088.2024.00124}
}
Abstract: This work presents a study where we used incongruencies on the cognitive and the perceptual layer to investigate their effects on perceived plausibility and, thereby, presence and spatial presence. We used a 2x3 within-subject design with the factors familiar size (cognitive manipulation) and immersion (perceptual manipulation). For the different levels of immersion, we implemented three different tracking qualities: rotation-and-translation tracking, rotation-only tracking, and stereoscopic-view-only tracking. Participants scanned products in a virtual supermarket where the familiar size of these objects was manipulated. Simultaneously, they either could move their head normally or had to use the thumbsticks to navigate their view of the environment. Results show that both manipulations had a negative effect on perceived plausibility and, thereby, presence. In addition, the tracking manipulation also had a negative effect on spatial presence. These results are especially interesting in light of the ongoing discussion about the role of plausibility and congruence in evaluating XR environments. The results can hardly be explained by traditional presence models, where immersion should not be an influencing factor for perceived plausibility. However, they are in agreement with the recently introduced Congruence and Plausibility (CaP) model and provide empirical evidence for the model's predicted pathways.
Martin J. Koch, Astrid Carolus, Carolin Wienrich, Marc Erich Latoschik,
Meta AI Literacy Scale: Further validation and development of a short version
, In
Heliyon
, p. 23
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{koch2024literacy,
title = {Meta AI Literacy Scale: Further validation and development of a short version},
author = {Koch, Martin J. and Carolus, Astrid and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {Heliyon},
year = {2024},
pages = {23},
url = {https://www.cell.com/heliyon/fulltext/S2405-8440(24)15717-9},
doi = {10.1016/j.heliyon.2024.e39686}
}
Abstract: The concept of AI literacy, its promotion, and measurement are important topics as they prepare society for the steadily advancing spread of AI technology. The first purpose of the current study is to advance the measurement of AI literacy by collecting evidence regarding the validity of the Meta AI Literacy Scale (MAILS) by Carolus and colleagues published in 2023: a self-assessment instrument for AI literacy and additional psychological competencies conducive for the use of AI. For this purpose, we first formulated the intended measurement purposes of the MAILS. In a second step, we derived empirically testable axioms and subaxioms from the purposes. We tested them in several already published and newly collected data sets. The results are presented in the form of three different empirical studies. We found overall evidence for the validity of the MAILS with some unexpected findings that require further research. We discuss the results for each study individually and also together. Also, avenues for future research are discussed. The study’s second purpose is to develop a short version (10 items) of the original instrument (34 items). It was possible to find a selection of ten items that represent the factors of the MAILS and show a good model fit when tested with confirmatory factor analysis. Further research will be needed to validate the short scale. This paper advances the knowledge about the validity and provides a short measure for AI literacy. However, more research will be necessary to further our understanding of the relationships between AI literacy and other constructs.
Christian Rack, Lukas Schach, Felix Achter, Yousof Shehada, Jinghuai Lin, Marc Erich Latoschik,
Motion Passwords
, In
Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology
(
19)
, pp. 1-11
.
New York, NY, USA
:
Association for Computing Machinery
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@conference{rack2024motion,
title = {Motion Passwords},
author = {Rack, Christian and Schach, Lukas and Achter, Felix and Shehada, Yousof and Lin, Jinghuai and Latoschik, Marc Erich},
booktitle = {Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology},
year = {2024},
number = {19},
pages = {1-11},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3641825.3687711},
doi = {10.1145/3641825.3687711}
}
Abstract: This paper introduces “Motion Passwords”, a novel biometric authentication approach where virtual reality users verify their identity by physically writing a chosen word in the air with their hand controller. This method allows combining three layers of verification: knowledge-based password input, handwriting style analysis, and motion profile recognition. As a first step towards realizing this potential, we focus on verifying users based on their motion profiles. We conducted a data collection study with 48 participants, who performed over 3800 Motion Password signatures across two sessions. We assessed the effectiveness of feature-distance and similarity-learning methods for motion-based verification using the Motion Passwords as well as specific and uniform ball-throwing signatures used in previous works. In our results, the similarity-learning model was able to verify users with the same accuracy for both signature types. This demonstrates that Motion Passwords, even when applying only the motion-based verification layer, achieve reliability comparable to previous methods. This highlights the potential for Motion Passwords to become even more reliable with the addition of knowledge-based and handwriting style verification layers. Furthermore, we present a proof-of-concept Unity application demonstrating the registration and verification process with our pretrained similarity-learning model. We publish our code, the Motion Password dataset, the pretrained model, and our Unity prototype on https://github.com/cschell/MoPs
Christian Rack, Vivek Nair, Lukas Schach, Felix Foschum, Marcel Roth, Marc Erich Latoschik,
Navigating the Kinematic Maze: Analyzing, Standardizing and Unifying XR Motion Datasets
, In
2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{noauthororeditor2024navigating,
title = {Navigating the Kinematic Maze: Analyzing, Standardizing and Unifying XR Motion Datasets},
author = {Rack, Christian and Nair, Vivek and Schach, Lukas and Foschum, Felix and Roth, Marcel and Latoschik, Marc Erich},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2024},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2024-01-Rack-Navigating_the_Kinematic_Maze.pdf},
doi = {10.1109/VRW62533.2024.00098}
}
Abstract: This paper addresses the critical importance of standards and documentation in kinematic research, particularly within Extended Reality (XR) environments. We focus on the pivotal role of motion data, emphasizing the challenges posed by the current lack of standardized practices in XR user motion datasets. Our work involves a detailed analysis of 8 existing datasets, identifying gaps in documentation and essential specifications such as coordinate systems, rotation representations, and units of measurement. We highlight how these gaps can lead to misinterpretations and irreproducible results. Based on our findings, we propose a set of guidelines and best practices for creating and documenting motion datasets, aiming to improve their quality, usability, and reproducibility. We also created a web-based tool for visual inspection of motion recordings, further aiding in dataset evaluation and standardization. Furthermore, we introduce the XR Motion Dataset Catalogue, a collection of the analyzed datasets in a unified and aligned format. This initiative significantly streamlines access for researchers, allowing them to download partial or entire datasets with a single line of code and without the need for additional alignment efforts. Our contributions enhance dataset integrity and reliability in kinematic research, paving the way for more consistent and scientifically robust studies in this evolving field.
Smi Hinterreiter, Timo Spinde, Sebastian Oberdörfer, Isao Echizen, Marc Erich Latoschik,
News Ninja: Gamified Annotation Of Linguistic Bias In Online News
, In
Proceedings of the ACM Human-Computer Interaction
, Vol.
8
(
CHI PLAY, Article 327)
, p. 29
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{hinterreiter2024ninja,
title = {News Ninja: Gamified Annotation Of Linguistic Bias In Online News},
author = {Hinterreiter, Smi and Spinde, Timo and Oberdörfer, Sebastian and Echizen, Isao and Latoschik, Marc Erich},
journal = {Proceedings of the ACM Human-Computer Interaction},
year = {2024},
volume = {8},
number = {CHI PLAY, Article 327},
pages = {29},
url = {https://dl.acm.org/doi/10.1145/3677092},
doi = {10.1145/3677092}
}
Abstract: Recent research shows that visualizing linguistic bias mitigates its negative effects. However, reliable automatic detection methods to generate such visualizations require costly, knowledge-intensive training data. To facilitate data collection for media bias datasets, we present News Ninja, a game employing data-collecting game mechanics to generate a crowdsourced dataset. Before annotating sentences, players are educated on media bias via a tutorial. Our findings show that datasets gathered with crowdsourced workers trained on News Ninja can reach significantly higher inter-annotator agreements than expert and crowdsourced datasets with similar data quality. As News Ninja encourages continuous play, it allows datasets to adapt to the reception and contextualization of news over time, presenting a promising strategy to reduce data collection expenses, educate players, and promote long-term bias mitigation.
Maximilian Landeck, Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik,
Object Motion Manipulation and time perception in virtual reality
, In
Frontiers in Virtual Reality
, Vol.
5
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10.3389/frvir.2024.1390703,
title = {Object Motion Manipulation and time perception in virtual reality},
author = {Landeck, Maximilian and Unruh, Fabian and Lugrin, Jean-Luc and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2024},
volume = {5},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2024.1390703},
doi = {10.3389/frvir.2024.1390703}
}
Abstract: This paper presents a novel approach to altering how time is perceived in Virtual Reality (VR). It involves manipulating the speed and pattern of motion in objects associated with timekeeping, both directly (such as clocks) and indirectly (like pendulums). Objects influencing our perception of time are called ‘zeitgebers’; for instance, observing a clock or pendulum tends to affect how we perceive the passage of time. The speed of motion of their internal parts (clock hands or pendulum rings) is explicitly or implicitly related to the perception of time. However, the perceptual effects of accelerating or decelerating the speed of a virtual clock or pendulum in VR are still an open question. We hypothesize that the acceleration of their internal motion will accelerate the passage of time and that the irregularity of the orbit pendulum’s motion will amplify this effect. We anticipate that the irregular movements of the pendulum will lower boredom and heighten attention, thereby making time seem to pass more quickly. Therefore, we conducted an experiment with 32 participants, exposing them to two types of virtual zeitgebers exhibiting both regular and irregular motions. These were a virtual clock and an orbit pendulum, each operating at slow, normal, and fast speeds. Our results revealed that time passed faster when participants observed virtual zeitgebers in the fast speed condition than in the slow speed condition. The orbit pendulum significantly accelerated the perceived passage of time compared to the clock. We believe that the irregular motion requires a higher degree of attention, which is confirmed by the significantly longer gaze fixations of the participants. These findings are crucial for time perception manipulation in VR, offering potential for innovative treatments for conditions like depression and improving wellbeing. Yet, further clinical research is needed to confirm these applications.
Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik,
Out-Of-Virtual-Body Experiences: Virtual Disembodiment Effects on Time Perception in VR
, In
Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology
Benjamin Weyers, Daniel Zielasko, Rob Lindeman, Stefania Serafin, Eike Langbehn, Victoria Interrante, Gerd Bruder, J. Edward Swan II, Christoph Borst, Carolin Wienrich, Rebecca Fribourg (Eds.),
(
20)
, pp. 20:1-20:11
.
New York, NY, USA
:
Association for Computing Machinery
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{conf/vrst/UnruhLL24,
title = {Out-Of-Virtual-Body Experiences: Virtual Disembodiment Effects on Time Perception in VR},
author = {Unruh, Fabian and Lugrin, Jean-Luc and Latoschik, Marc Erich},
editor = {Weyers, Benjamin and Zielasko, Daniel and Lindeman, Rob and Serafin, Stefania and Langbehn, Eike and Interrante, Victoria and Bruder, Gerd and II, J. Edward Swan and Borst, Christoph and Wienrich, Carolin and Fribourg, Rebecca},
booktitle = {Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology},
year = {2024},
number = {20},
pages = {20:1-20:11},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3641825.3687717},
doi = {10.1145/3641825.3687717}
}
Abstract: This paper presents a novel experiment investigating the relationship between virtual disembodiment and time perception in Virtual Reality (VR). Recent work demonstrated that the absence of a virtual body in a VR application changes the perception of time. However, the effects of simulating an out-of-body experience (OBE) in VR on time perception are still unclear. We designed an experiment with two types of virtual disembodiment techniques based on viewpoint gradual transition: a virtual body’s behind view and facing view transitions. We investigated their effects on forty-four participants in an interactive scenario where a lamp was repeatedly activated and time intervals were estimated. Our results show that, while both techniques elicited a significant virtual disembodiment perception, time duration estimations in the minute range were only shorter in the facing view compared to the eye view condition. We believe that reducing agency in the facing view is a key factor in the time perception alteration. This provides first steps towards a novel approach to manipulating time perception in VR, with potential applications for mental health treatments such as schizophrenia or depression and for improving our understanding of the relation between body, virtual body, and time.
Martin J. Koch, Carolin Wienrich, Samantha Straka, Marc Erich Latoschik, Astrid Carolus,
Overview and confirmatory and exploratory factor analysis of AI literacy scale
, In
Computers and Education: Artificial Intelligence
, Vol.
7
, p. 100310
.
Elsevier BV
, 2024.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@article{Koch_2024,
title = {Overview and confirmatory and exploratory factor analysis of AI literacy scale},
author = {Koch, Martin J. and Wienrich, Carolin and Straka, Samantha and Latoschik, Marc Erich and Carolus, Astrid},
journal = {Computers and Education: Artificial Intelligence},
year = {2024},
volume = {7},
pages = {100310},
publisher = {Elsevier BV},
url = {http://dx.doi.org/10.1016/j.caeai.2024.100310},
doi = {10.1016/j.caeai.2024.100310}
}
Christian Merz, Jonathan Tschanter, Florian Kern, Jean-Luc Lugrin, Carolin Wienrich, Marc Erich Latoschik,
Pipelining Processors for Decomposing Character Animation
, In
30th ACM Symposium on Virtual Reality Software and Technology
.
New York, NY, USA
:
Association for Computing Machinery
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{merz2024processor,
title = {Pipelining Processors for Decomposing Character Animation},
author = {Merz, Christian and Tschanter, Jonathan and Kern, Florian and Lugrin, Jean-Luc and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {30th ACM Symposium on Virtual Reality Software and Technology},
year = {2024},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3641825.3689533},
doi = {10.1145/3641825.3689533}
}
Abstract: This paper presents an openly available implementation of a modular pipeline architecture for character animation. It effectively decomposes frequently necessary processing steps into dedicated character processors, such as copying data from various motion sources, applying inverse kinematics, or scaling the character. Processors can easily be parameterized, extended (e.g., with AI), and freely arranged or even duplicated in any order necessary, greatly reducing side effects and fostering fine-tuning, maintenance, and reusability of the complex interplay of real-time animation steps.
Sebastian Oberdörfer, Sandra Birnstiel, Marc Erich Latoschik,
Proteus Effect or Bodily Affordance? The Influence of Virtual High-Heels on Gait Behavior
, In
Virtual Reality
, Vol.
28
(
2)
, p. 81
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{oberdorfer2024proteus,
title = {Proteus Effect or Bodily Affordance? The Influence of Virtual High-Heels on Gait Behavior},
author = {Oberdörfer, Sebastian and Birnstiel, Sandra and Latoschik, Marc Erich},
journal = {Virtual Reality},
year = {2024},
volume = {28},
number = {2},
pages = {81},
url = {https://rdcu.be/dCXMh},
doi = {10.1007/s10055-024-00966-5}
}
Abstract: Shoes are an important part of the fashion industry, stereotypically affect our self-awareness as well as external perception, and can even biomechanically modify our gait pattern. Immersive Virtual Reality (VR) enables users not only to explore virtual environments, but also to control an avatar as a proxy for themselves. These avatars can wear any kind of shoe, which might similarly affect self-awareness due to the Proteus Effect and even cause a bodily affordance to change the gait pattern. Bodily affordance describes a behavioral change in accordance with the expected constraints of the avatar a user is embodied with. In this article, we present the results of three user studies investigating potential changes in the gait pattern evoked by wearing virtual high-heels. Two user studies targeted female participants and one focused on male participants. The participants wore either virtual sneakers or virtual high-heels while constantly wearing sneakers or socks in reality. To measure the gait pattern, the participants walked on a treadmill that was also added to the virtual environment. We measured significant differences in stride length and in the flexion of the hips and knees at heel strike and partly at toe off. Also, participants reported walking more comfortably in the virtual sneakers than in the virtual high-heels. This indicates a strong acceptance of the virtual shoes as their real shoes and hence suggests the existence of a bodily affordance. While sparking a discussion about the boundaries as well as aspects of the Proteus Effect and providing another insight into the effects of embodiment in VR, our results might also be important for researchers and developers.
Sebastian Oberdörfer, Sophia C Steinhaeusser, Amiin Najjar, Clemens Tümmers, Marc Erich Latoschik,
Pushing Yourself to the Limit - Influence of Emotional Virtual Environment Design on Physical Training in VR
, In
ACM Games
, Vol.
2
(
4)
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{oberdorfer2024pushing,
title = {Pushing Yourself to the Limit - Influence of Emotional Virtual Environment Design on Physical Training in VR},
author = {Oberdörfer, Sebastian and Steinhaeusser, Sophia C and Najjar, Amiin and Tümmers, Clemens and Latoschik, Marc Erich},
journal = {ACM Games},
year = {2024},
volume = {2},
number = {4},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2024-acmgames-sport-ve-preprint.pdf}
}
Abstract: The design of virtual environments (VEs) can strongly influence users' emotions. These VEs are also an important aspect of immersive Virtual Reality (VR) exergames: training systems that can inspire athletes to train in a highly motivated way and achieve a higher training intensity. VR-based training and rehabilitation systems can increase a user's motivation to train and to repeat physical exercises. The surrounding VE can influence users' motivation and hence potentially even their physical performance. Besides providing potentially motivating environments, physical training can be enhanced by gamification. However, it is unclear whether the surrounding VE of a VR-based physical training system influences the effectiveness of gamification. We investigate whether an emotionally positive or emotionally negative design influences sport performance and interacts with the positive effects of gamification. In a user study, we immersed participants in VEs following either an emotionally positive, neutral, or negative design and measured how long the participants could hold a static strength-endurance exercise. The study investigated the effects of (1) emotional VE design and (2) the presence or absence of gamification. We did not observe significant differences in the performance of the participants across the VE design or gamification conditions. Gamification caused a dominating effect on emotion and motivation over the emotional design of the VEs, thus indicating an overall positive impact. The emotional design influenced the participants' intrinsic motivation but caused mixed results with respect to emotion. Overall, our results indicate the importance of using gamification, support the commonly used emotionally positive VEs for physical training, but further indicate that the design space could also include other directions of VE design.
Philipp Krop, Martin J. Koch, Astrid Carolus, Marc Erich Latoschik, Carolin Wienrich,
The Effects of Expertise, Humanness, and Congruence on Perceived Trust, Warmth, Competence and Intention to Use Embodied AI
, In
Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24)
, p. 9
.
New York, NY, USA
:
ACM
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{krop2024effects,
title = {The Effects of Expertise, Humanness, and Congruence on Perceived Trust, Warmth, Competence and Intention to Use Embodied AI},
author = {Krop, Philipp and Koch, Martin J. and Carolus, Astrid and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24)},
year = {2024},
pages = {9},
publisher = {ACM},
address = {New York, NY, USA},
url = {https://dl.acm.org/doi/pdf/10.1145/3613905.3650749},
doi = {10.1145/3613905.3650749}
}
Abstract: Even though people imagine different embodiments when asked which AI they would like to work with, most studies investigate trust in AI systems without specific physical appearances. This study aims to close this gap by combining influencing factors of trust to analyze their impact on the perceived trustworthiness, warmth, and competence of an embodied AI. We recruited 68 participants who observed three co-working scenes with an embodied AI, presented as expert/novice (expertise), human/AI (humanness), or congruent/slightly incongruent to the environment (congruence). Our results show that the expertise condition had the largest impact on trust, acceptance, and perceived warmth and competence. When controlled for perceived competence, the humanness of the AI and the congruence of its embodiment to the environment also influence acceptance. The results show that besides expertise and the perceived competence of the AI, other design variables are relevant for successful human-AI interaction, especially when the AI is embodied.
Larissa Brübach, Marius Röhm, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
The Influence of a Low-Resolution Peripheral Display Extension on the Perceived Plausibility and Presence
, In
Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology
(
3)
.
New York, NY, USA
:
Association for Computing Machinery
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{brubach2024influence,
title = {The Influence of a Low-Resolution Peripheral Display Extension on the Perceived Plausibility and Presence},
author = {Brübach, Larissa and Röhm, Marius and Westermeier, Franziska and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology},
year = {2024},
number = {3},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3641825.3687713},
doi = {10.1145/3641825.3687713}
}
Abstract: The Field of View (FoV) is a central technical display characteristic of Head-Mounted Displays (HMDs), which has been shown to have a notable impact on important aspects of the user experience. For example, an increased FoV has been shown to foster a sense of presence and improve peripheral information processing, but it also increases the risk of VR sickness. This article investigates the impact of a wider but inhomogeneous FoV on perceived plausibility, measuring its effects on presence, spatial presence, and VR sickness as a comparison to and replication of effects from prior work. We developed a low-resolution peripheral display extension to pragmatically increase the FoV, taking into account the lower peripheral acuity of the human eye. While this design results in inhomogeneous resolutions at the edges of the HMD's display, it is also a low-complexity, low-cost extension. However, its effects on important VR qualities have to be identified. We conducted two experiments with 30 and 27 participants, respectively. In a randomized 2x3 within-subject design, participants played three rounds of bowling in VR, both with and without the display extension. Two rounds contained incongruencies to induce breaks in plausibility. In experiment 2, we enhanced one incongruency to make it more noticeable and improved the previously identified shortcomings of the display extension. However, in neither study did the low-resolution FoV extension show a measurable effect on perceived plausibility, presence, spatial presence, or VR sickness. We found that one of the incongruencies could cause a break in plausibility without the extension, confirming the results of a previous study.
Erik Wolf, Carolin Wienrich, Marc Erich Latoschik,
Towards an Altered Body Image Through the Exposure to a Modulated Self in Virtual Reality
, In
2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
, pp. 857-858
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{wolf2024towards,
title = {Towards an Altered Body Image Through the Exposure to a Modulated Self in Virtual Reality},
author = {Wolf, Erik and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2024},
pages = {857-858},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-altered-body-perception-through-modulated-self-preprint.pdf},
doi = {10.1109/VRW62533.2024.00225}
}
Abstract: Self-exposure using modulated embodied avatars in virtual reality (VR) may support a positive body image. However, further investigation is needed to address methodological challenges and to understand the concrete effects, including their quantification. We present an iteratively refined paradigm for studying the tangible effects of exposure to a modulated self in VR. Participants perform body-centered movements in front of a virtual mirror, encountering their photorealistically personalized embodied avatar with increased, decreased, or unchanged body size. Additionally, we propose different body size estimation tasks conducted in reality and VR before and after exposure to assess participants' putatively elicited perceptual adaptations.
Christian Merz, Christopher Göttfert, Carolin Wienrich, Marc Erich Latoschik,
Universal Access for Social XR Across Devices: The Impact of Immersion on the Experience in Asymmetric Virtual Collaboration
, In
Proceedings of the 31st IEEE Virtual Reality conference (VR '24)
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{merz2024universal,
title = {Universal Access for Social XR Across Devices: The Impact of Immersion on the Experience in Asymmetric Virtual Collaboration},
author = {Merz, Christian and Göttfert, Christopher and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 31st IEEE Virtual Reality conference (VR '24)},
year = {2024},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-universal-access-social-xr.pdf},
doi = {10.1109/VR58804.2024.00105}
}
Abstract: This article investigates the influence of input/output device characteristics and degrees of immersion on the User Experience (UX) of specific eXtended Reality (XR) effects, i.e., presence, self-perception, other-perception, and task perception. It targets universal access to social XR, where dedicated XR hardware is unavailable or cannot be used, but participation is desirable or even necessary. We compare three different device configurations: (i) desktop screen with mouse, (ii) desktop screen with tracked controllers, and (iii) Head-Mounted Display (HMD) with tracked controllers. 87 participants took part in collaborative dyadic interaction (a sorting task) with asymmetric device configurations in a specifically developed social XR. In line with prior research, the sense of presence and embodiment were significantly lower for the desktop setups. However, we only found minor differences in task load and no differences in usability and enjoyment of the task between the conditions. Additionally, the perceived humanness and virtual human plausibility of the other were not affected, no matter the device used. Finally, there was no impact regarding co-presence and social presence independent of the level of immersion of oneself or the other. We conclude that the device in social XR is important for self-perception and presence. However, our results indicate that the devices do not affect important UX and usability aspects, specifically, the qualities of social interaction in collaborative scenarios, paving the way for universal access to social XR encounters and significantly promoting participation.
Jinghuai Lin, Christian Rack, Carolin Wienrich, Marc Erich Latoschik,
Usability, Acceptance, and Trust of Privacy Protection Mechanisms and Identity Management in Social Virtual Reality
, In
23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
.
IEEE Computer Society
, 2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{lin2024usability,
title = {Usability, Acceptance, and Trust of Privacy Protection Mechanisms and Identity Management in Social Virtual Reality},
author = {Lin, Jinghuai and Rack, Christian and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2024},
publisher = {IEEE Computer Society},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-social-vr-identity-management-preprint.pdf},
doi = {10.1109/ISMAR62088.2024.00027}
}
Abstract: In social virtual reality (social VR), users are threatened by potential cybercrimes, such as identity theft, sensitive data breaches, and embodied harassment. These concerns are heightened by the increasing interest in the metaverse, the advancements in photorealistic 3D user reconstructions, and the rising incidents of online privacy violations. Designing secure social VR applications that protect users while enhancing their experience, acceptance and trust remains a challenge. This article investigates potential identity management solutions in social VR, and their impacts on usability and user acceptance. We developed a social VR prototype with novel and established countermeasures, including motion biometric verification, and conducted a study with 52 participants. Our findings reveal diverse preferences for identity management and underscore the importance of authenticity, autonomy, and reciprocity. Key findings include: passive verification is favored for pragmatic user experience, while active verification is preferred for its hedonic quality; continuous or periodic verification strengthens users’ confidence in their privacy; and while user awareness promotes authentic engagement, it may also diminish the willingness to disclose personal information. This research not only offers foundational insights into the evaluated scenarios and countermeasures, but also sheds light on the designs of more trustworthy and inclusive social VR applications.
Nina Döllinger, David Mal, Sebastian Keppler, Erik Wolf, Mario Botsch, Johann Habakuk Israel, Marc Erich Latoschik, Carolin Wienrich,
Virtual Body Swapping: A VR-Based Approach to Embodied Third-Person Self-Processing in Mind-Body Therapy
, In
2024 CHI Conference on Human Factors in Computing Systems
, pp. 1-18
.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{dollinger2024bodyswap,
title = {Virtual Body Swapping: A VR-Based Approach to Embodied Third-Person Self-Processing in Mind-Body Therapy},
author = {Döllinger, Nina and Mal, David and Keppler, Sebastian and Wolf, Erik and Botsch, Mario and Israel, Johann Habakuk and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {2024 CHI Conference on Human Factors in Computing Systems},
year = {2024},
pages = {1-18},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-chi-bodyswap-preprint.pdf},
doi = {10.1145/3613904.3642328}
}
Abstract: Virtual reality (VR) offers various opportunities for innovative therapeutic approaches, especially regarding self-related mind-body interventions. We introduce a VR body swap system enabling multiple users to swap their perspectives and appearances and evaluate its effects on virtual sense of embodiment (SoE) and perception- and cognition-based self-related processes. In a self-compassion-framed scenario, twenty participants embodied their personalized, photorealistic avatar, swapped bodies with an unfamiliar peer, and reported their SoE, interoceptive awareness (perception), and self-compassion (cognition). Participants' experiences differed between bottom-up and top-down processes. Regarding SoE, their agency and self-location shifted to the swap avatar, while their top-down self-identification remained with their personalized avatar. Further, the experience positively affected interoceptive awareness but not self-compassion. Our outcomes offer novel insights into the SoE in a multiple-embodiment scenario and highlight the need to differentiate between the different processes in intervention design. They raise concerns and requirements for future research on avatar-based mind-body interventions.
Timo Menzel, Erik Wolf, Stephan Wenninger, Niklas Spinczyk, Lena Holderrieth, Ulrich Schwanecke, Marc Erich Latoschik, Mario Botsch,
WILDAVATARS: Smartphone-Based Reconstruction of Full-Body Avatars in the Wild
, In
TechRxiv
.
2024.
Preprint
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{menzel2024wildavatars,
title = {WILDAVATARS: Smartphone-Based Reconstruction of Full-Body Avatars in the Wild},
author = {Menzel, Timo and Wolf, Erik and Wenninger, Stephan and Spinczyk, Niklas and Holderrieth, Lena and Schwanecke, Ulrich and Latoschik, Marc Erich and Botsch, Mario},
journal = {TechRxiv},
year = {2024},
note = {Preprint},
url = {https://d197for5662m48.cloudfront.net/documents/publicationstatus/221002/preprint_pdf/475c2f7830adb5d85a17466ac50bc9c5.pdf},
doi = {10.36227/techrxiv.172503940.07538627/v1}
}
Abstract: Realistic full-body avatars play a key role in representing users in virtual environments, where they have been shown to considerably improve body ownership and presence. Driven by the growing demand for realistic virtual humans, extensive research on scanning-based avatar reconstruction has been conducted in recent years. Most methods, however, require complex hardware, such as expensive camera rigs and/or controlled capture setups, thereby restricting avatar generation to specialized labs. We propose WILDAVATARS, an approach that empowers even non-experts without access to complex equipment to capture realistic avatars in the wild. Our avatar generation is based on an easy-to-use smartphone application that guides the user through the scanning process and uploads the captured data to a server, which in a fully automatic manner reconstructs a photorealistic avatar that is ready to be downloaded into a VR application. To increase the availability and foster the use of realistic virtual humans in VR applications, we will make WILDAVATARS publicly available for research purposes.
2023
Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik,
A Subjective Quality Assessment of Temporally Reprojected Specular Reflections in Virtual Reality
, In
2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
, pp. 825-826
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{misiak2023subjective,
title = {A Subjective Quality Assessment of Temporally Reprojected Specular Reflections in Virtual Reality},
author = {Mišiak, Martin and Fuhrmann, Arnulph and Latoschik, Marc Erich},
booktitle = {2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2023},
pages = {825-826},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ieeevr-noise-perception.pdf},
doi = {10.1109/VRW58643.2023.00255}
}
Abstract: Temporal reprojection is a popular method for mitigating sampling artifacts from a variety of sources. This work investigates its impact on the subjective quality of specular reflections in Virtual Reality (VR). Our results show that temporal reprojection is highly effective at improving the visual comfort of specular materials, especially at low sample counts. A slightly diminished effect could also be observed in improving the subjective accuracy of the resulting reflection.
Larissa Brübach, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
A Systematic Evaluation of Incongruencies and Their Influence on Plausibility in Virtual Reality
, In
2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 894-901
.
IEEE Computer Society
, 2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{brubach2023systematic,
title = {A Systematic Evaluation of Incongruencies and Their Influence on Plausibility in Virtual Reality},
author = {Brübach, Larissa and Westermeier, Franziska and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2023},
pages = {894-901},
publisher = {IEEE Computer Society},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ismar-bruebach-a-systematic-evaluation-of-incongruencies-preprint.pdf},
doi = {10.1109/ISMAR59233.2023.00105}
}
Abstract: Currently, there is an ongoing debate about the influencing factors of one's extended reality (XR) experience. Plausibility, congruence, and their role have recently gained more and more attention. One of the latest models to describe XR experiences, the Congruence and Plausibility model (CaP), puts plausibility and congruence right in the center. However, it is unclear what influence they have on the overall XR experience and what influences our perceived plausibility rating. In this paper, we implemented four different incongruencies within a virtual reality scene using breaks in plausibility as an analogy to breaks in presence. These manipulations were located on either the cognitive or the perceptual layer of the CaP model, and were either connected to the task at hand or not. We tested these manipulations in a virtual bowling environment to see what influence they had. Our results show that manipulations connected to the task caused a lower perceived plausibility. Additionally, cognitive manipulations seem to have a larger influence than perceptual manipulations. We were able to cause a break in plausibility with one of our incongruencies. These results provide a first direction for how the influence of plausibility in XR can be systematically investigated in the future.
Franziska Westermeier, Larissa Brübach, Carolin Wienrich, Marc Erich Latoschik,
A Virtualized Augmented Reality Simulation for Exploring Perceptual Incongruencies
, In
Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology
.
New York, NY, USA
:
Association for Computing Machinery
, 2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{westermeier2023virtualized,
title = {A Virtualized Augmented Reality Simulation for Exploring Perceptual Incongruencies},
author = {Westermeier, Franziska and Brübach, Larissa and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology},
year = {2023},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3611659.3617227},
doi = {10.1145/3611659.3617227}
}
Abstract: When blending virtual and physical content, certain incongruencies emerge from hardware limitations, inaccurate tracking, or different appearances of virtual and physical content. They prevent us from perceiving virtual and physical content as one experience. Hence, it is crucial to investigate these issues to determine how they influence our experience. We present a virtualized augmented reality simulation that can systematically examine single incongruencies or different configurations.
Erik Göbel, Kristof Korwisi, Andrea Bartl, Martin Hennecke, Marc Erich Latoschik,
Algorithmen erleben in VR
, pp. 415-416
.
Bonn
:
Gesellschaft für Informatik e.V.
, 2023.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{goebel2023algorithmen,
title = {Algorithmen erleben in VR},
author = {Göbel, Erik and Korwisi, Kristof and Bartl, Andrea and Hennecke, Martin and Latoschik, Marc Erich},
year = {2023},
pages = {415--416},
publisher = {Gesellschaft für Informatik e.V.},
address = {Bonn},
url = {https://dl.gi.de/items/40798a70-3bec-43f8-8dff-827ab9d4650a},
doi = {10.18420/infos2023-046}
}
Sebastian Oberdörfer, Sandra Birnstiel, Sophia C. Steinhaeusser, Marc Erich Latoschik,
An Approach to Investigate an Influence of Visual Angle Size on Emotional Activation During a Decision-Making Task
, In
Virtual, Augmented and Mixed Reality (HCII 2023)
, pp. 649-664
.
Cham
:
Springer Nature Switzerland
, 2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2023approach,
title = {An Approach to Investigate an Influence of Visual Angle Size on Emotional Activation During a Decision-Making Task},
author = {Oberdörfer, Sebastian and Birnstiel, Sandra and Steinhaeusser, Sophia C. and Latoschik, Marc Erich},
booktitle = {Virtual, Augmented and Mixed Reality (HCII 2023)},
year = {2023},
pages = {649--664},
publisher = {Springer Nature Switzerland},
address = {Cham},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2023-hcii-igt-visual-angles-preprint.pdf},
doi = {10.1007/978-3-031-35634-6_47}
}
Abstract: Decision-making is an important ability in our daily lives. Decision-making can be influenced by emotions. A virtual environment and objects in it might follow an emotional design, thus potentially influencing the mood of a user. A higher visual angle on a particular stimulus can lead to a higher emotional response to it. The use of immersive virtual reality (VR) surrounds a user visually with a virtual environment, as opposed to the partial immersion of using a normal computer screen. This higher immersion may result in a greater visual angle on a particular stimulus and thus a stronger emotional response to it. In a between-subjects user study, we compare the results of a decision-making task in VR presented at three different visual angles. We used the Iowa Gambling Task (IGT) as the task and to detect potential differences in decision-making. The IGT was displayed in one of three sizes, thus yielding visual angles of 20°, 35°, and 50°. Our results indicate no difference between the three conditions with respect to decision-making. Thus, our results possibly imply that a higher visual angle has no influence on a task that is influenced by emotions but is otherwise cognitive.
Martin Mišiak, Tom Müller, Arnulph Fuhrmann, Marc Erich Latoschik,
An Evaluation of Dichoptic Tonemapping in Virtual Reality Experiences
, In
GI VR/AR Workshop
.
2023.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{misiak2023evaluation,
title = {An Evaluation of Dichoptic Tonemapping in Virtual Reality Experiences},
author = {Mišiak, Martin and Müller, Tom and Fuhrmann, Arnulph and Latoschik, Marc Erich},
booktitle = {GI VR/AR Workshop},
year = {2023},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-givrar-dichoptic-tonemapping.pdf}
}
Peter Kullmann, Timo Menzel, Mario Botsch, Marc Erich Latoschik,
An Evaluation of Other-Avatar Facial Animation Methods for Social VR
, In
Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
, pp. 1-7
.
New York, NY, USA
:
Association for Computing Machinery
, 2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{kullmann2023facialExpressionComparison,
title = {An Evaluation of Other-Avatar Facial Animation Methods for Social VR},
author = {Kullmann, Peter and Menzel, Timo and Botsch, Mario and Latoschik, Marc Erich},
booktitle = {Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems},
year = {2023},
pages = {1--7},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-chi-Kullmann-An-Evaluation-of-Other-Avatar-Facial-Animation-Methods-for-Social-VR.pdf},
doi = {10.1145/3544549.3585617}
}
Abstract: We report a mixed-design study on the effect of facial animation method (static, synthesized, or tracked expressions) and its synchronization to speaker audio (in sync or delayed by the method’s inherent latency) on an avatar’s perceived naturalness and plausibility. We created a virtual human for an actor and recorded his spontaneous half-minute responses to conversation prompts. As a simulated immersive interaction, 44 participants unfamiliar with the actor observed and rated performances rendered with the avatar, each with the different facial animation methods. Half of them observed performances in sync and the others with the animation method’s latency. Results show audio synchronization did not influence ratings and static faces were rated less natural and less plausible than animated faces. Notably, synthesized expressions were rated as more natural and more plausible than tracked expressions. Moreover, ratings of verbal behavior naturalness differed in the same way. We discuss implications of these results for avatar-mediated communication.
Felix Sittner, Oliver Hartmann, Sergio Montenegro, Jan-Philipp Friese, Larissa Brübach, Marc Erich Latoschik, Carolin Wienrich,
An Update on the Virtual Mission Control Room
, In
2023 Small Satellite Conference
.
Utah State University, Logan, UT
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{sittner2023update,
title = {An Update on the Virtual Mission Control Room},
author = {Sittner, Felix and Hartmann, Oliver and Montenegro, Sergio and Friese, Jan-Philipp and Brübach, Larissa and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {2023 Small Satellite Conference},
year = {2023},
address = {Utah State University, Logan, UT},
url = {https://digitalcommons.usu.edu/smallsat/2023/all2023/193/}
}
Abstract: In 2021 we presented the Virtual Mission Control Room (VMCR) on the verge of evolving from a fun educational project into a testing ground for remote cooperative mission control. Since then, we successfully participated in ESA's 2022 campaign "New ideas to make XR a reality", which granted us additional funding to improve the VMCR software and conduct usability testing in cooperation with the Chair of Human-Computer Interaction. In this paper and the corresponding poster session, we give an update on the current state of the project, its new features, and the project structure. We explain the changes suggested by early test users and ESA to make operators feel more at home in the virtual environment. Subsequently, our project partners present their first suggestions for improvements to the VMCR as well as their plans for user testing. We conclude with lessons learned and a look ahead into our plans for the future of the project.
Nina Döllinger, Erik Wolf, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Are Embodied Avatars Harmful to our Self-Experience? The Impact of Virtual Embodiment on Body Awareness
, In
2023 CHI Conference on Human Factors in Computing Systems
, pp. 1-14
.
2023.
Honorable Mention 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{dollinger2023embodied,
title = {Are Embodied Avatars Harmful to our Self-Experience? The Impact of Virtual Embodiment on Body Awareness},
author = {Döllinger, Nina and Wolf, Erik and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {2023 CHI Conference on Human Factors in Computing Systems},
year = {2023},
pages = {1-14},
note = {Honorable Mention 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-chi-virtual-mirrors-body-awareness-preprint.pdf},
doi = {10.1145/3544548.3580918}
}
Abstract: Virtual Reality (VR) allows us to replace our visible body with a virtual self-representation (avatar) and to explore its effects on our body perception. While the feeling of owning and controlling a virtual body is widely researched, how VR affects the awareness of internal body signals (body awareness) remains open. Forty participants performed moving meditation tasks in reality and VR, either facing their mirror image or not. Both the virtual environment and avatars photorealistically matched their real counterparts.
We found a negative effect of VR on body awareness, mediated by feeling embodied in and changed by the avatar. Further, we revealed a negative effect of a mirror on body awareness. Our results indicate that assessing body awareness should be essential in evaluating VR designs and avatar embodiment aiming at mental health, as even a scenario as close to reality as possible can distract users from their internal body signals.
Fabian Unruh, David H.V. Vogel, Maximilian Landeck, Jean-Luc Lugrin, Marc Erich Latoschik,
Body and Time: Virtual Embodiment and its effect on Time Perception
, In
IEEE Transactions on Visualization and Computer Graphics (TVCG)
, Vol.
29
(
5)
, pp. 2626-2636
.
2023.
IEEE VR Best Paper Nominee 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{unruh2023virtual,
title = {Body and Time: Virtual Embodiment and its effect on Time Perception},
author = {Unruh, Fabian and Vogel, David H.V. and Landeck, Maximilian and Lugrin, Jean-Luc and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
year = {2023},
volume = {29},
number = {5},
pages = {2626 - 2636},
note = {IEEE VR Best Paper Nominee 🏆},
url = {https://ieeexplore.ieee.org/abstract/document/10049718},
doi = {10.1109/TVCG.2023.3247040}
}
Abstract: This article explores the effect of one’s body representation on time perception. Time perception is modulated by a variety of factors including, e.g., the current situation or activity, it can display significant disturbances caused by psychological disorders, and it is influenced by emotional and interoceptive states, i.e., “the sense of the physiological condition of the body”. We investigated this relation between one’s own body and the perception of time in a novel Virtual Reality (VR) experiment explicitly fostering user activity. 36 participants randomly experienced different degrees of embodiment: i) without an avatar (low), ii) with hands (medium), and iii) with a high quality avatar (high). Participants had to repeatedly activate a virtual lamp and estimate the duration of time intervals as well as judge the passage of time. Our results show a significant effect of embodiment on time perception: time passes slower in the low embodiment condition compared to the medium and high conditions. In contrast to prior work, the study provides missing evidence that this effect is independent of the level of activity of participants: In our task users were prompted to repeatedly perform body actions, thereby ruling out a potential influence of the level of activity. Importantly, duration judgements in both the millisecond and minute ranges seemed unaffected by variations in embodiment. Taken together, these results lead to a better understanding of the relationship between the body and time.
Simon Seibt, Bartosz Von Rymon Lipinski, Thomas Chang, Marc Erich Latoschik,
DFM4SFM - Dense Feature Matching for Structure from Motion
, In
2023 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW)
, pp. 1-5
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10328368,
title = {DFM4SFM - Dense Feature Matching for Structure from Motion},
author = {Seibt, Simon and Von Rymon Lipinski, Bartosz and Chang, Thomas and Latoschik, Marc Erich},
booktitle = {2023 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW)},
year = {2023},
pages = {1-5},
url = {https://ieeexplore.ieee.org/document/10328368},
doi = {10.1109/ICIPC59416.2023.10328368}
}
Abstract: Structure from motion (SfM) is a fundamental task in computer vision and allows recovering the 3D structure of a stationary scene from an image set. Finding robust and accurate feature matches plays a crucial role in the early stages of SfM. In this work, we propose a novel method for computing image correspondences based on dense feature matching (DFM) using homographic decomposition: the underlying pipeline provides refinement of existing matches through iterative rematching, detection of occlusions, and extrapolation of additional matches in critical image areas between image pairs. Our main contributions are improvements of DFM specifically for SfM, resulting in global refinement and global extrapolation of image correspondences between related views. Furthermore, we propose an iterative version of the Delaunay-triangulation-based outlier detection algorithm for robust processing of repeated image patterns. Through experiments, we demonstrate that the proposed method significantly improves the reconstruction accuracy.
Marie Luisa Fiedler, Erik Wolf, Nina Döllinger, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Embodiment and Personalization for Self-Identification with Virtual Humans
, In
2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
, pp. 799-800
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{fiedler2023selfidentification,
title = {Embodiment and Personalization for Self-Identification with Virtual Humans},
author = {Fiedler, Marie Luisa and Wolf, Erik and Döllinger, Nina and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2023},
pages = {799-800},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ieeevr-self-identification-preprint.pdf},
doi = {10.1109/VRW58643.2023.00242}
}
Abstract: Our work investigates the impact of virtual human embodiment and personalization on the sense of embodiment (SoE) and self-identification (SI). We introduce preliminary items to query self-similarity (SS) and self-attribution (SA) with virtual humans as dimensions of SI. In our study, 64 participants successively observed personalized and generic-looking virtual humans, either as embodied avatars in a virtual mirror or as agents, while performing tasks. They reported significantly higher SoE and SI when facing personalized virtual humans and significantly higher SoE and SA when facing embodied avatars, indicating that both factors have strong separate and complementary influences on SoE and SI.
Franziska Westermeier, Larissa Brübach, Marc Erich Latoschik, Carolin Wienrich,
Exploring Plausibility and Presence in Mixed Reality Experiences
, In
IEEE Transactions on Visualization and Computer Graphics
, Vol.
29
(
5)
, pp. 2680-2689
.
2023.
IEEE VR Best Paper Nominee 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{westermeier2023exploring,
title = {Exploring Plausibility and Presence in Mixed Reality Experiences},
author = {Westermeier, Franziska and Brübach, Larissa and Latoschik, Marc Erich and Wienrich, Carolin},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2023},
volume = {29},
number = {5},
pages = {2680-2689},
note = {IEEE VR Best Paper Nominee 🏆},
url = {https://ieeexplore.ieee.org/document/10049710},
doi = {10.1109/TVCG.2023.3247046}
}
Abstract: Mixed Reality (MR) applications along Milgram's Reality-Virtuality (RV) continuum motivated a number of recent theories on potential constructs and factors describing MR experiences. This paper investigates the impact of incongruencies on the sensation/perception and cognition layers to provoke breaks in plausibility, and the effects these breaks have on spatial and overall presence as prominent constructs of Virtual Reality (VR). We developed a simulated maintenance application to test virtual electrical devices. Participants performed test operations on these devices in a counterbalanced, randomized 2x2 between-subject design in either VR as congruent, or Augmented Reality (AR) as incongruent on the sensation/perception layer. Cognitive incongruency was induced by the absence of traceable power outages, decoupling perceived cause and effect after activating potentially defective devices. Our results indicate significant differences in the plausibility ratings between the VR and AR conditions, hence between congruent/incongruent conditions on the sensation/perception layer. In addition, spatial presence revealed a comparable interaction pattern with the VR vs AR conditions. Both factors decreased for the AR condition (incongruent sensation/perception) compared to VR (congruent sensation/perception) for the congruent cognitive case but increased for the incongruent cognitive case. The results are discussed and put into perspective in the scope of recent theories of MR experiences.
Maximilian Landeck, Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik,
From Clocks to Pendulums: A Study on the Influence of External Moving Objects on Time Perception in Virtual Environments
, In
The 29th ACM Symposium on Virtual Reality Software and Technology (VRST)
, p. 11
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{landeck2023clocks,
title = {From Clocks to Pendulums: A Study on the Influence of External Moving Objects on Time Perception in Virtual Environments},
author = {Landeck, Maximilian and Unruh, Fabian and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {The 29th ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2023},
pages = {11},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023_vrst_conference_influence_moving_objects_on_time_perception__preprint_version_1.pdf},
doi = {10.1145/3611659.3615703}
}
Abstract: This paper investigates the relationship between perceived object motion and the experience of time in virtual environments. We developed an application to measure how the motion properties of virtual objects and the degree of immersion and embodiment may affect the time experience. A first study (n = 145) was conducted remotely using an online video survey, while a second study (n = 60) was conducted under laboratory conditions in virtual reality (VR). Participants in both studies experienced seven different virtual objects in a randomized order and then answered questions about time experience. The VR study added an "embodiment" condition in which participants were either represented by a virtual full body or lacked any form of virtual body representation.
In both studies, time was judged to pass faster when viewing oscillating motion in immersive and non-immersive settings and independently of the presence or absence of a virtual body. This trend was strongest when virtual pendulums were displayed. Both studies also found a significant inverse correlation between the passage of time and boredom. Our results support the development of applications that manipulate the perception of time in virtual environments for therapeutic use, for instance, for disorders such as depression, autism, and schizophrenia. Disturbances in the perception of time are known to be associated with these disorders.
Carolin Wienrich, Jana Krauss, Lukas Polifke, Viktoria Horn, Arne Bürger, Marc Erich Latoschik,
Harnessing the Potential of the Metaverse to Counter its Dangers
.
2023.
[BibTeX]
[Download]
[BibSonomy]
@misc{wienrich2023harnessing,
title = {Harnessing the Potential of the Metaverse to Counter its Dangers},
author = {Wienrich, Carolin and Krauss, Jana and Polifke, Lukas and Horn, Viktoria and Bürger, Arne and Latoschik, Marc Erich},
year = {2023},
url = {}
}
Marie Luisa Fiedler, Erik Wolf, Carolin Wienrich, Marc Erich Latoschik,
Holographic Augmented Reality Mirrors for Daily Self-Reflection on the Own Body Image
, In
CHI 2023 WS28 Integrating Individual and Social Contexts into Self-Reflection Technologies Workshop
, pp. 1-4
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fiedler2023holographic,
title = {Holographic Augmented Reality Mirrors for Daily Self-Reflection on the Own Body Image},
author = {Fiedler, Marie Luisa and Wolf, Erik and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {CHI 2023 WS28 Integrating Individual and Social Contexts into Self-Reflection Technologies Workshop},
year = {2023},
pages = {1-4},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-chi-reflection-workshop.pdf}
}
Abstract: Mirror self-reflection can help us to develop a deeper understanding and appreciation of our body. Due to technological advancements, holographic augmented reality (AR) mirrors can create realistic visualizations of virtual humans that can represent one's appearance in an altered way while remaining in a familiar environment. Further developing those mirrors opens a new field for use in everyday life. In this work, we outline possible future scenarios where AR mirrors can empower individuals to visualize their emotions, thought patterns, and discrepancies related to their physical body and mental body image. Thus, AR mirrors can encourage their self-reflection, promote a positive and healthy relationship with their bodies, or motivate them to take action to improve their well-being.
Samantha Straka, Martin Jakobus Koch, Astrid Carolus, Marc Erich Latoschik, Carolin Wienrich,
How Do Employees Imagine AI They Want to Work with: A Drawing Study
, In
Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
.
New York, NY, USA
:
Association for Computing Machinery
, 2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10.1145/3544549.3585631,
title = {How Do Employees Imagine AI They Want to Work with: A Drawing Study},
author = {Straka, Samantha and Koch, Martin Jakobus and Carolus, Astrid and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems},
year = {2023},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3544549.3585631},
doi = {10.1145/3544549.3585631}
}
Abstract: Perceptions about AI influence the attribution of characteristics and the interaction with AI. To find out how workers imagine an AI they would like to work with and what characteristics they attribute to it, we asked 174 working individuals to draw an AI they would like to work with, to report five adjectives they associate with their drawing and to evaluate the drawn and three other, typical AI representations (e.g. robot, smartphone) either presented as male or female. Participants mainly drew humanoid or robotic AIs. The adjectives that describe AI mainly referred to the inner characteristics, capabilities, shape, or relationship types. Regarding the evaluation, we identified four dimensions (warmth, competence, animacy, size) that can be reproduced for male and female AIs and different AI representations. This work addresses diverse conceptions of AI in the workplace and shows that human-centered AI development is necessary to address the huge design space.
Maximilian Landeck, Federico Alvarez Igarzábal, Fabian Unruh, Hannah Habenicht, Shiva Khoshnoud, Marc Wittmann, Jean-Luc Lugrin, Marc Erich Latoschik,
Journey Through a Virtual Tunnel: Simulated Motion and its Effects on the Experience of Time
, In
Frontiers in Virtual Reality
, Vol.
3
, p. 195
.
Frontiers
, 2023.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@article{landeck3journey,
title = {Journey Through a Virtual Tunnel: Simulated Motion and its Effects on the Experience of Time},
author = {Landeck, Maximilian and Alvarez Igarzábal, Federico and Unruh, Fabian and Habenicht, Hannah and Khoshnoud, Shiva and Wittmann, Marc and Lugrin, Jean-Luc and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2023},
volume = {3},
pages = {195},
publisher = {Frontiers},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.1059971/full},
doi = {10.3389/frvir.2022.1059971}
}
Astrid Carolus, Martin J. Koch, Samantha Straka, Marc Erich Latoschik, Carolin Wienrich,
MAILS - Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies
, In
Computers in Human Behavior: Artificial Humans
, Vol.
1
(
2)
, p. 100014
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{carolus2023mails,
title = {MAILS - Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies},
author = {Carolus, Astrid and Koch, Martin J. and Straka, Samantha and Latoschik, Marc Erich and Wienrich, Carolin},
journal = {Computers in Human Behavior: Artificial Humans},
year = {2023},
volume = {1},
number = {2},
pages = {100014},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Carolus-and-Kocch-(2023)-MAILS.pdf},
doi = {10.1016/j.chbah.2023.100014}
}
Abstract: Valid measurement of AI literacy is important for the selection of personnel, the identification of shortages in skill and knowledge, and the evaluation of AI literacy interventions. A questionnaire is still missing that is deeply grounded in the existing literature on AI literacy (AIL), is modularly applicable depending on the goals, and includes further psychological competencies in addition to the typical facets of AIL. This paper presents the development and validation of a questionnaire considering the desiderata described above. We derived items to represent different facets of AI literacy and psychological competencies, such as problem-solving, learning, and emotion regulation in regard to AI. We collected data from 300 German-speaking adults to confirm the factorial structure. The result is the Meta AI Literacy Scale (MAILS) for AI literacy with the facets Use & apply AI, Understand AI, Detect AI, and AI Ethics; the ability to Create AI as a separate construct; and AI Self-efficacy in learning and problem-solving and AI Self-management (i.e., AI persuasion literacy and emotion regulation). This study contributes to research on AI literacy by providing a measurement instrument relying on profound competency models. Psychological competencies are included that are particularly important in the context of pervasive change through AI systems.
Jinghuai Lin, Johrine Cronjé, Ivo Käthner, Paul Pauli, Marc Erich Latoschik,
Measuring Interpersonal Trust towards Virtual Humans with a Virtual Maze Paradigm
, In
IEEE Transactions on Visualization and Computer Graphics (TVCG)
, Vol.
29
(
5)
, p. 2401–2411
.
2023.
IEEE VR Best Paper Honorable Mention 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{lin2023measuring,
title = {Measuring Interpersonal Trust towards Virtual Humans with a Virtual Maze Paradigm},
author = {Lin, Jinghuai and Cronjé, Johrine and Käthner, Ivo and Pauli, Paul and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
year = {2023},
volume = {29},
number = {5},
pages = {2401–2411},
note = {IEEE VR Best Paper Honorable Mention 🏆},
url = {https://ieeexplore.ieee.org/document/10049655},
doi = {10.1109/TVCG.2023.3247095}
}
Abstract: Virtual humans, including virtual agents and avatars, play an increasingly important role as VR technology advances. For example, virtual humans are used as proxies of users in social VR or as interfaces for AI assistants in online financing. Interpersonal trust is an essential prerequisite in real-life interactions, as well as in the virtual world. However, to date, there are no established interpersonal trust measurement tools specifically for virtual humans in virtual reality. Previously, a virtual maze task was proposed to measure trust towards virtual characters. For the current study, a variant of the paradigm was implemented. The task of the users (the trustors) is to navigate through a maze in virtual reality, where they can interact with a virtual human (the trustee). They can choose to 1) ask for advice and 2) follow advice from the virtual human if they want to. These measures served as behavioural measures of trust. We conducted a validation study with 70 participants in a between-subject design. The two conditions did not differ in the content of the advice but in the appearance, tone of voice, and engagement of the trustees (presented as avatars allegedly controlled by other participants). Results indicate that the experimental manipulation was successful, as participants rated the virtual human as more trustworthy in the trustworthy condition than in the untrustworthy condition. Importantly, this manipulation affected the trust behaviour of our participants, who, in the trustworthy condition, asked for advice more often and followed advice more often, indicating that the paradigm is sensitive to assessing interpersonal trust towards virtual humans. Thus, our paradigm can be used to measure differences in interpersonal trust towards virtual humans and may serve as a valuable research tool to investigate trust in virtual reality.
Christian Rack, Lukas Schach, Marc Latoschik,
Motion Learning Toolbox – A Python library for preprocessing of XR motion tracking data for machine learning applications
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@misc{rack2023motionlearningtoolbox,
title = {Motion Learning Toolbox – A Python library for preprocessing of XR motion tracking data for machine learning applications},
author = {Rack, Christian and Schach, Lukas and Latoschik, Marc},
year = {2023},
url = {https://github.com/cschell/Motion-Learning-Toolbox}
}
Abstract: The Motion Learning Toolbox is a Python library designed to facilitate the preprocessing of motion tracking data in extended reality (XR) setups. It's particularly useful for researchers and engineers wanting to use XR tracking data as input for machine learning models. Originally developed for academic research targeting the identification of XR users by their motions, this toolbox includes a variety of data encoding methods that enhance machine learning model performance.
Jean-Luc Lugrin, Jessica Topel, Yann Glémarec, Birgit Lugrin, Marc Erich Latoschik,
Posture Parameters for Personality-Enhanced Virtual Audiences
, In
Proceedings of the 23rd International Conference on Intelligent Virtual Agents (IVA)
.
2023.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2023posture,
title = {Posture Parameters for Personality-Enhanced Virtual Audiences},
author = {Lugrin, Jean-Luc and Topel, Jessica and Glémarec, Yann and Lugrin, Birgit and Latoschik, Marc Erich},
booktitle = {Proceedings of the 23rd International Conference on Intelligent Virtual Agents (IVA)},
year = {2023},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-iva-lugrin-posture-rarameters-for-personality-enhanced-virtual-audiences.pdf}
}
Florian Kern, Marc Erich Latoschik,
Reality Stack I/O: A Versatile and Modular Framework for Simplifying and Unifying XR Applications and Research
, In
2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
, pp. 74-76
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10322199,
title = {Reality Stack I/O: A Versatile and Modular Framework for Simplifying and Unifying XR Applications and Research},
author = {Kern, Florian and Latoschik, Marc Erich},
booktitle = {2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
year = {2023},
pages = {74-76},
url = {https://ieeexplore.ieee.org/document/10322199},
doi = {10.1109/ISMAR-Adjunct60411.2023.00023}
}
Abstract: This paper introduces Reality Stack I/O (RSIO), a versatile and modular framework designed to facilitate the development of extended reality (XR) applications. Researchers and developers often spend a significant amount of time enabling cross-device and cross-platform compatibility, leading to delays and increased complexity. RSIO provides the essential features to simplify and unify the development of XR applications. It enhances cross-device and cross-platform compatibility, expedites integration, and allows developers to focus more on building XR experiences rather than device integration. We offer a public Unity reference implementation with examples.
Sebastian Oberdörfer, Anne Elsässer, Silke Grafe, Marc Erich Latoschik,
Superfrog: Comparing Learning Outcomes and Potentials of a Worksheet, Smartphone, and Tangible AR Learning Environment
, In
2023 9th International Conference of the Immersive Learning Research Network (iLRN)
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{oberdorfer2023comparing,
title = {Superfrog: Comparing Learning Outcomes and Potentials of a Worksheet, Smartphone, and Tangible AR Learning Environment},
author = {Oberdörfer, Sebastian and Elsässer, Anne and Grafe, Silke and Latoschik, Marc Erich},
booktitle = {2023 9th International Conference of the Immersive Learning Research Network (iLRN)},
year = {2023},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ilrn-horst-learning-preprint.pdf}
}
Abstract: The widespread availability of smartphones facilitates the integration of digital, augmented reality (AR), and tangible augmented reality (TAR) learning environments into the classroom. A haptic aspect can enhance the user’s overall experience during a learning process. To investigate further benefits of using TAR for educational purposes, we compare a TAR and a smartphone learning environment with a traditional worksheet counterpart in terms of learning effectiveness, emotions, motivation, and cognitive load. 64 sixth-grade students from a German high school used one of the three conditions to learn about frog anatomy. We found no significant differences in learning effectiveness and cognitive load. The TAR condition elicited significantly higher positive emotions than the worksheet, but not the smartphone condition. Both digital learning environments elicited significantly higher motivation, in contrast to the worksheet. Thus, our results suggest that smartphone and TAR learning environments are equally beneficial for enhancing learning.
Florian Kern, Florian Niebling, Marc Erich Latoschik,
Text Input for Non-Stationary XR Workspaces: Investigating Tap and Word-Gesture Keyboards in Virtual and Augmented Reality
, In
IEEE Transactions on Visualization and Computer Graphics
, pp. 2658-2669
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{kern2023input,
title = {Text Input for Non-Stationary XR Workspaces: Investigating Tap and Word-Gesture Keyboards in Virtual and Augmented Reality},
author = {Kern, Florian and Niebling, Florian and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2023},
pages = {2658--2669},
url = {https://ieeexplore.ieee.org/document/10049665/},
doi = {10.1109/TVCG.2023.3247098}
}
Abstract: This article compares two state-of-the-art text input techniques between non-stationary virtual reality (VR) and video see-through augmented reality (VST AR) use cases as XR display conditions. The developed contact-based mid-air virtual tap and word-gesture (swipe) keyboards provide established support functions for text correction, word suggestions, capitalization, and punctuation. A user evaluation with 64 participants revealed that XR displays and input techniques strongly affect text entry performance, while subjective measures are only influenced by the input techniques. We found significantly higher usability and user experience ratings for tap keyboards compared to swipe keyboards in both VR and VST AR. Task load was also lower for tap keyboards. In terms of performance, both input techniques were significantly faster in VR than in VST AR. Further, the tap keyboard was significantly faster than the swipe keyboard in VR. Participants showed a significant learning effect with only ten sentences typed per condition. Our results are consistent with previous work in VR and optical see-through (OST) AR, but additionally provide novel insights into the usability and performance of the selected text input techniques for VST AR. The significant differences in subjective and objective measures emphasize the importance of specific evaluations for each possible combination of input techniques and XR displays to provide reusable, reliable, and high-quality text input solutions. With our work, we form a foundation for future research and XR workspaces. Our reference implementation is publicly available to encourage replicability and reuse in future XR workspaces.
David Mal, Erik Wolf, Nina Döllinger, Carolin Wienrich, Marc Erich Latoschik,
The Impact of Avatar and Environment Congruence on Plausibility, Embodiment, Presence, and the Proteus Effect in Virtual Reality
, In
IEEE Transactions on Visualization and Computer Graphics
, Vol.
29
(
5)
, pp. 2358-2368
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{mal2023impact,
title = {The Impact of Avatar and Environment Congruence on Plausibility, Embodiment, Presence, and the Proteus Effect in Virtual Reality},
author = {Mal, David and Wolf, Erik and Döllinger, Nina and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2023},
volume = {29},
number = {5},
pages = {2358-2368},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ieeevr-avatar-and-environment-congruence-preprint.pdf},
doi = {10.1109/TVCG.2023.3247089}
}
Abstract: Many studies show the significance of the Proteus effect for serious virtual reality applications. The present study extends the existing knowledge by considering the relationship (congruence) between the self-embodiment (avatar) and the virtual environment. We investigated the impact of avatar and environment types and their congruence on avatar plausibility, sense of embodiment, spatial presence, and the Proteus effect. In a 2 × 2 between-subjects design, participants embodied either an avatar in sports- or business wear in a semantic congruent or incongruent environment while performing lightweight exercises in virtual reality. The avatar-environment congruence significantly affected the avatar’s plausibility but not the sense of embodiment or spatial presence. However, a significant Proteus effect emerged only for participants who reported a high feeling of (virtual) body ownership, indicating that a strong sense of having and owning a virtual body is key to facilitating the Proteus effect. We discuss the results assuming current theories of bottom-up and top-down determinants of the Proteus effect and thus contribute to understanding its underlying mechanisms and determinants.
Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik,
The Impact of Reflection Approximations on Visual Quality in Virtual Reality
, In
ACM Symposium on Applied Perception 2023 (SAP '23), August
, pp. 1-11
.
ACM
, 2023.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{misiak2023reflectionApprox,
title = {The Impact of Reflection Approximations on Visual Quality in Virtual Reality},
author = {Mišiak, Martin and Fuhrmann, Arnulph and Latoschik, Marc Erich},
booktitle = {ACM Symposium on Applied Perception 2023 (SAP '23), August},
year = {2023},
pages = {1--11},
publisher = {ACM},
url = {https://doi.org/10.1145/3605495.3605794}
}
Jinghuai Lin, Johrine Cronjé, Ivo Käthner, Paul Pauli, Marc Erich Latoschik,
The Virtual Maze: a Tool for Measuring Trust towards Virtual Humans
, In
Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents
, p. 1–3
.
ACM
, 2023.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{Lin_2023,
title = {The Virtual Maze: a Tool for Measuring Trust towards Virtual Humans},
author = {Lin, Jinghuai and Cronjé, Johrine and Käthner, Ivo and Pauli, Paul and Latoschik, Marc Erich},
booktitle = {Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents},
year = {2023},
pages = {1–3},
publisher = {ACM},
url = {http://dx.doi.org/10.1145/3570945.3607295},
doi = {10.1145/3570945.3607295}
}
Maximilian Landeck, Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik,
Time Perception Research in Virtual Reality: Lessons Learned
, In
Mensch und Computer 2023
.
Gesellschaft für Informatik e.V., P. Fröhlich & V. Cobus (Eds.)
, 2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{landeck2023perception,
title = {Time Perception Research in Virtual Reality: Lessons Learned},
author = {Landeck, Maximilian and Unruh, Fabian and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {Mensch und Computer 2023},
year = {2023},
publisher = {Gesellschaft für Informatik e.V.},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/virtual-times/publications/2023_MUC_Landeck_TimePerceptionInVR.pdf},
doi = {10.18420/muc2023-mci-ws05-442}
}
Abstract: In this article, we present a selection of recent studies from our research group that investigated the relationship between time perception and virtual reality (VR). We focus on the influence of avatar embodiment, visual fidelity, motion perception, and body representation. We summarize findings on the impact of these factors on time perception, discuss lessons learned, and implications for future applications.
In a waiting room experiment, the passage of time in VR with an avatar was perceived significantly faster than without an avatar. The passage of time in the real waiting room was not perceived as significantly different from the waiting room in VR with or without an avatar.
In an interactive scenario, the absence of a virtual avatar resulted in a significantly slower perceived passage of time compared to the partial and full-body avatar conditions. The high and medium embodiment conditions are assumed to be more plausible and to differ less from a real experience.
A virtual tunnel that induced the illusion of self-motion (vection) appeared to contribute to the perceived passage of time and experience of time. This effect was shown to increase with tunnel speed and the number of tunnel segments.
A framework was proposed for the use of virtual zeitgebers along three dimensions (speed, density, synchronicity) to systematically control the experience of time. The body itself, as well as external objects, seem to be addressed by this theory of virtual zeitgebers.
Finally, the standardization of the methodology and future research considerations are discussed.
Yann Glémarec, Jean-Luc Lugrin, Amelie Hörmann, Anne-Gwenn Bosser, Cédric Buche, Marc Erich Latoschik, Norina Lauer,
Towards Virtual Audience Simulation For Speech Therapy
, In
Proceedings of the 23rd International Conference on Intelligent Virtual Agents (IVA)
.
2023.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{glemarec2023towards,
title = {Towards Virtual Audience Simulation For Speech Therapy},
author = {Glémarec, Yann and Lugrin, Jean-Luc and Hörmann, Amelie and Bosser, Anne-Gwenn and Buche, Cédric and Latoschik, Marc Erich and Lauer, Norina},
booktitle = {Proceedings of the 23rd International Conference on Intelligent Virtual Agents (IVA)},
year = {2023},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-iva-towards-virtual-audience-simulation-for-speech-therapy-preprint.pdf}
}
Philipp Krop, Sebastian Oberdörfer, Marc Erich Latoschik,
Traversing the Pass: Improving the Knowledge Retention of Serious Games Using a Pedagogical Agent
, In
Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents
.
Würzburg, Germany
:
Association for Computing Machinery
, 2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{krop2023traversing,
title = {Traversing the Pass: Improving the Knowledge Retention of Serious Games Using a Pedagogical Agent},
author = {Krop, Philipp and Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents},
year = {2023},
publisher = {Association for Computing Machinery},
address = {Würzburg, Germany},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-iva-traversing-the-pass.pdf},
doi = {10.1145/3570945.3607360}
}
Abstract: Machine learning is an essential aspect of modern life that many educational institutions incorporate into their curricula. Often, students struggle to grasp how neural networks learn. Teaching these concepts could be assisted with pedagogical agents and serious games, which both have proven helpful for complex topics like engineering. We present "Traversing the Pass," a serious game that utilizes a mentor-like agent to explain the underlying machine learning concepts and provides feedback. We optimized the agent’s design in a pre-study before evaluating its effectiveness compared to a text-only user interface with experts and students. Participants performed better in a second assessment two weeks later if they played the game using the agent. Although criticized as repetitive, the game created an understanding of basic machine learning concepts and achieved high flow values. Our results indicate that agents could be used to enhance the beneficial effects of serious games with improved knowledge retention.
Fabian Kerwagen, Konrad F. Fuchs, Melanie Ullrich, Andreas Schulze, Samantha Straka, Philipp Krop, Marc E. Latoschik, Fabian Gilbert, Andreas Kunz, Georg Fette, Stefan Störk, Maximilian Ertl,
Usability of a Mhealth Solution Using Speech Recognition for Point-of-care Diagnostic Management
, In
Journal of Medical Systems
, Vol.
47
(
18)
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{kerwagen2023,
title = {Usability of a Mhealth Solution Using Speech Recognition for Point-of-care Diagnostic Management},
author = {Kerwagen, Fabian and Fuchs, Konrad F. and Ullrich, Melanie and Schulze, Andreas and Straka, Samantha and Krop, Philipp and Latoschik, Marc E. and Gilbert, Fabian and Kunz, Andreas and Fette, Georg and Störk, Stefan and Ertl, Maximilian},
journal = {Journal of Medical Systems},
year = {2023},
volume = {47},
number = {18},
url = {https://link.springer.com/article/10.1007/s10916-022-01896-y},
doi = {10.1007/s10916-022-01896-y}
}
Abstract: The administrative burden for physicians in the hospital can affect the quality of patient care. The Service Center Medical Informatics (SMI) of the University Hospital Würzburg developed and implemented the smartphone-based mobile application (MA) ukw.mobile1 that uses speech recognition for the point-of-care ordering of radiological examinations. The aim of this study was to examine the usability of the MA workflow for the point-of-care ordering of radiological examinations. All physicians at the Department of Trauma and Plastic Surgery at the University Hospital Würzburg, Germany, were asked to participate in a survey including the short version of the User Experience Questionnaire (UEQ-S) and the Unified Theory of Acceptance and Use of Technology (UTAUT). For the analysis of the different domains of user experience (overall attractiveness, pragmatic quality and hedonic quality), we used a two-sided dependent sample t-test. For the determinants of the acceptance model, we employed regression analysis. Twenty-one of 30 physicians (mean age 34 ± 8 years, 62% male) completed the questionnaire. Compared to the conventional desktop application (DA) workflow, the new MA workflow showed superior overall attractiveness (mean difference 2.15 ± 1.33), pragmatic quality (mean difference 1.90 ± 1.16), and hedonic quality (mean difference 2.41 ± 1.62; all p < .001). The user acceptance measured by the UTAUT (mean 4.49 ± 0.41; min. 1, max. 5) was also high. Performance expectancy (beta = 0.57, p = .02) and effort expectancy (beta = 0.36, p = .04) were identified as predictors of acceptance; the full predictive model explained 65.4% of the variance. Point-of-care mHealth solutions using innovative technology such as speech recognition seem to address the users' needs and to offer higher usability in comparison to conventional technology. Implementation of user-centered mHealth innovations might therefore help to facilitate physicians' daily work.
Christian Rack, Konstantin Kobs, Tamara Fernando, Andreas Hotho, Marc Erich Latoschik,
Versatile User Identification in Extended Reality using Pretrained Similarity-Learning
, In
arXiv
, p. arXiv:2302.07517
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@preprint{2023arXiv230207517S,
title = {Versatile User Identification in Extended Reality using Pretrained Similarity-Learning},
author = {Rack, Christian and Kobs, Konstantin and Fernando, Tamara and Hotho, Andreas and Latoschik, Marc Erich},
journal = {arXiv},
year = {2023},
pages = {arXiv:2302.07517},
url = {https://arxiv.org/abs/2302.07517},
doi = {10.48550/arXiv.2302.07517}
}
Abstract: In this paper, we combine the strengths of distance-based and classification-based approaches for the task of identifying extended reality users by their movements. For this, we explore an embedding-based model that leverages deep metric learning. We train the model on a dataset of users playing the VR game "Half-Life: Alyx" and conduct multiple experiments and analyses using a state-of-the-art classification-based model as a baseline. The results show that the embedding-based method 1) is able to identify new users from non-specific movements using only a few minutes of enrollment data, 2) can enroll new users within seconds, while retraining the baseline approach takes almost a day, 3) is more reliable than the baseline approach when only little enrollment data is available, 4) can be used to identify new users from another dataset recorded with different VR devices.
Altogether, our solution is a foundation for easily extensible XR user identification systems, applicable to a wide range of user motions. It also paves the way for production-ready models that could be used by XR practitioners without the requirements of expertise, hardware, or data for training deep learning models.
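The enrollment-then-identification scheme such embedding-based approaches enable can be sketched in a few lines. This is only an illustration of the general idea, not the paper's model: the per-user embedding vectors are assumed to come from a trained metric-learning network, and all function names here are hypothetical.

```python
import numpy as np

def enroll(embeddings_per_user):
    """Enrollment: average each user's embedding vectors into one centroid.

    embeddings_per_user maps a user id to an (n_samples, dim) array of
    embeddings produced by the (assumed) pretrained similarity model.
    """
    return {user: np.mean(vecs, axis=0) for user, vecs in embeddings_per_user.items()}

def identify(embedding, centroids):
    """Identification: return the enrolled user whose centroid is most
    cosine-similar to the query embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(centroids, key=lambda user: cos(embedding, centroids[user]))
```

Enrolling a new user then reduces to one averaging step over a few embeddings, which is why no retraining is needed.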
Rebecca Hein, Jeanine Steinbock, Maria Eisenmann, Marc Erich Latoschik, Carolin Wienrich,
Virtual Reality im modernen Englischunterricht und das Potenzial für Inter- und Transkulturelles Lernen - Eine Pilotstudie
, In
MedienPädagogik: Zeitschrift für Theorie Und Praxis Der Medienbildung
Miriam Mulders, Josef Buchner, Andreas Dengel, Raphael Zender (Eds.),
(
51)
, pp. 191-213
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{hein2023medienpaed,
title = {Virtual Reality im modernen Englischunterricht und das Potenzial für Inter- und Transkulturelles Lernen - Eine Pilotstudie},
author = {Hein, Rebecca and Steinbock, Jeanine and Eisenmann, Maria and Latoschik, Marc Erich and Wienrich, Carolin},
editor = {Mulders, Miriam and Buchner, Josef and Dengel, Andreas and Zender, Raphael},
journal = {MedienPädagogik: Zeitschrift für Theorie Und Praxis Der Medienbildung},
year = {2023},
number = {51},
pages = {191-213},
url = {https://www.medienpaed.com/article/view/1572},
doi = {10.21240/mpaed/51/2023.01.18.X},
}
Abstract: This article presents the results of a seminar concept and the accompanying research on inter- and transcultural learning in virtual reality (VR). In a university seminar, TEFL students (Teaching English as a Foreign Language) designed lessons for advanced upper-secondary English learners that addressed the development of empathy and perspective-taking in intercultural communication and exchange situations. The concept focused on understanding and accepting cultural commonalities and differences. Two of the resulting lesson plans for VR interventions are presented and discussed as examples. Empirical data were collected alongside the seminar. First, the students evaluated the potential of the InteractionSuitcase (a collection of virtual objects). Second, a qualitative method for measuring intercultural competence (Autobiography of Intercultural Encounters) was tested exploratively. The results of this exploratory pilot study show that the students rated interaction with the InteractionSuitcase as intuitive and valuable for designing teaching concepts. Nevertheless, the students integrated the manipulation of virtual avatars into their lesson plans more frequently than the InteractionSuitcase. The article aims to identify the potential of VR for inter- and transcultural learning in upper-secondary English teaching.
Tobias Mühling, Isabelle Späth, Joy Backhaus, Nathalie Milke, Sebastian Oberdörfer, Alexander Meining, Marc Erich Latoschik, Sarah König,
Virtual reality in medical emergencies training: benefits, perceived stress, and learning success
, In
Multimedia Systems
, Vol.
29
(
4)
, p. 2239–2252
.
Springer Nature
, 2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{muhling:2023,
title = {Virtual reality in medical emergencies training: benefits, perceived stress, and learning success},
author = {Mühling, Tobias and Späth, Isabelle and Backhaus, Joy and Milke, Nathalie and Oberdörfer, Sebastian and Meining, Alexander and Latoschik, Marc Erich and König, Sarah},
journal = {Multimedia Systems},
year = {2023},
volume = {29},
number = {4},
pages = {2239–2252},
publisher = {Springer Nature},
url = {https://link.springer.com/article/10.1007/s00530-023-01102-0},
doi = {10.1007/s00530-023-01102-0}
}
Abstract: Medical graduates lack procedural skills experience required to manage emergencies. Recent advances in virtual reality (VR) technology enable the creation of highly immersive learning environments representing easy-to-use and affordable solutions for training with simulation. However, the feasibility in compulsory teaching, possible side effects of immersion, perceived stress, and didactic benefits have to be investigated systematically. VR-based training sessions using head-mounted displays alongside a real-time dynamic physiology system were held by student assistants for small groups followed by debriefing with a tutor. In the pilot study, 36 students rated simulation sickness. In the main study, 97 students completed a virtual scenario as active participants (AP) and 130 students as observers (OBS) from the first-person perspective on a monitor. Participants completed questionnaires for evaluation purposes and exploratory factor analysis was performed on the items. The extent of simulation sickness remained low to acceptable among participants of the pilot study. In the main study, students valued the realistic environment and guided practical exercise. AP perceived the degree of immersion as well as the estimated learning success to be greater than OBS and proved to be more motivated post training. With respect to AP, the factor “sense of control” revealed a typical inverse U-shaped relationship to the scales “didactic value” and “individual learning benefit”. Summing up, curricular implementation of highly immersive VR-based training of emergencies proved feasible and found a high degree of acceptance among medical students. This study also provides insights into how different conceptions of perceived stress distinctively moderate subjective learning success.
Florian Kern, Jonathan Tschanter, Marc Erich Latoschik,
Virtual-to-Physical Surface Alignment and Refinement Techniques for Handwriting, Sketching, and Selection in XR
, In
2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
, pp. 502-506
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10108564,
title = {Virtual-to-Physical Surface Alignment and Refinement Techniques for Handwriting, Sketching, and Selection in XR},
author = {Kern, Florian and Tschanter, Jonathan and Latoschik, Marc Erich},
booktitle = {2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2023},
pages = {502-506},
url = {https://ieeexplore.ieee.org/document/10108564/},
doi = {10.1109/VRW58643.2023.00109}
}
Abstract: The alignment of virtual to physical surfaces is essential to improve symbolic input and selection in XR. Previous techniques optimized for efficiency can lead to inaccuracies. We investigate regression-based refinement techniques and introduce a surface accuracy evaluation. The results revealed that refinement techniques can substantially improve surface accuracy and show that accuracy depends on the gesture shape and surface dimension. Our reference implementation and dataset are publicly available.
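One form a regression-based surface refinement can take is a total-least-squares plane fit over the 3D points sampled from the physical surface, followed by projecting the samples onto that plane. This is a minimal sketch of the idea only, not the paper's reference implementation; the input is assumed to be an (n, 3) array of tracked sample points.

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane fit: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector for the smallest singular value of the
    # centered samples is the direction of least variance: the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def refine_surface(points):
    """Snap noisy surface samples onto their best-fit plane."""
    centroid, normal = fit_plane(points)
    signed_dist = (points - centroid) @ normal  # distance of each sample
    return points - np.outer(signed_dist, normal)
```

Samples that already lie on a common plane pass through unchanged, while tracking jitter perpendicular to the surface is regressed away.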
Jinghuai Lin, Johrine Cronjé, Carolin Wienrich, Paul Pauli, Marc Erich Latoschik,
Visual Indicators Representing Avatars' Authenticity in Social Virtual Reality and Their Impacts on Perceived Trustworthiness
, In
IEEE Transactions on Visualization and Computer Graphics (TVCG)
, Vol.
29
(
11)
, pp. 4589-4599
.
2023.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@article{lin2023visual,
title = {Visual Indicators Representing Avatars' Authenticity in Social Virtual Reality and Their Impacts on Perceived Trustworthiness},
author = {Lin, Jinghuai and Cronjé, Johrine and Wienrich, Carolin and Pauli, Paul and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
year = {2023},
volume = {29},
number = {11},
pages = {4589-4599},
url = {https://ieeexplore.ieee.org/document/10269746},
doi = {10.1109/TVCG.2023.3320234}
}
Christian Rack, Tamara Fernando, Murat Yalcin, Andreas Hotho, Marc Erich Latoschik,
Who Is Alyx? A new Behavioral Biometric Dataset for User Identification in XR
, In
Frontiers in Virtual Reality
David Swapp (Ed.),
, Vol.
4
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{rack2023behavioral,
title = {Who Is Alyx? A new Behavioral Biometric Dataset for User Identification in XR},
author = {Rack, Christian and Fernando, Tamara and Yalcin, Murat and Hotho, Andreas and Latoschik, Marc Erich},
editor = {Swapp, David},
journal = {Frontiers in Virtual Reality},
year = {2023},
volume = {4},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2023.1272234/full},
doi = {10.3389/frvir.2023.1272234}
}
Abstract: This article presents a new dataset containing motion and physiological data of users playing the game 'Half-Life: Alyx'. The dataset specifically targets behavioral and biometric identification of XR users. It includes motion and eye-tracking data captured by an HTC Vive Pro from 71 users playing the game on two separate days for 45 minutes. Additionally, we collected physiological data from 31 of these users. We provide benchmark performances for the task of motion-based identification of XR users with two prominent state-of-the-art deep learning architectures (GRU and CNN). After training on the first session of each user, the best model can identify the 71 users in the second session with a mean accuracy of 95% within 2 minutes. The dataset is freely available at https://github.com/cschell/who-is-alyx
David Fernes, Sebastian Oberdörfer, Marc Erich Latoschik,
Work, Trade, Learn: Developing an Immersive Serious Game for History Education
, In
2023 9th International Conference of the Immersive Learning Research Network (iLRN)
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fernes2023trade,
title = {Work, Trade, Learn: Developing an Immersive Serious Game for History Education},
author = {Fernes, David and Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {2023 9th International Conference of the Immersive Learning Research Network (iLRN)},
year = {2023},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ilrn-worklearntrade-preprint.pdf}
}
Abstract: History education often struggles with a lack of interest from students. Serious games can help make learning about history more engaging. Students can directly experience situations of the past as well as interact and communicate with agents representing people of the respective era. This allows for situated learning. Besides using computer screens, the gameplay can also be experienced using immersive Virtual Reality (VR). VR adds an additional spatial level and can further increase the engagement as well as vividness. To investigate the benefits of using VR for serious games targeting the learning of history, we developed a serious game for desktop-3D and VR. Our serious game puts a player into the role of a medieval miller’s apprentice. Following a situated learning approach, the learner operates a mill and interacts with several other characters. These agents discuss relevant facts of the medieval life, thus enabling the construction of knowledge about the life in medieval towns. An evaluation showed that the game in general was successful in increasing the user’s knowledge about the covered topics as well as their topic interest in history. Whether the immersive VR or the simple desktop version of the application was used did not have any influence on these results. Additional feedback was gathered to improve the game further in the future.
Nina Döllinger, Matthias Beck, Erik Wolf, David Mal, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
“If It’s Not Me It Doesn’t Make a Difference” – The Impact of Avatar Personalization on User Experience and Body Awareness in Virtual Reality
, In
2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 483-492
.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{dollinger2023doesnt,
title = {“If It’s Not Me It Doesn’t Make a Difference” – The Impact of Avatar Personalization on User Experience and Body Awareness in Virtual Reality},
author = {Döllinger, Nina and Beck, Matthias and Wolf, Erik and Mal, David and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2023},
pages = {483-492},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ismar-impact-of-avatar-appearance-preprint.pdf},
doi = {10.1109/ISMAR59233.2023.00063}
}
Abstract: Body awareness is relevant for the efficacy of psychotherapy. However, previous work on virtual reality (VR) and avatar-assisted therapy has often overlooked it. We investigated the effect of avatar individualization on body awareness in the context of VR-specific user experience, including sense of embodiment (SoE), plausibility, and sense of presence (SoP). In a between-subject design, 86 participants embodied three avatar types and engaged in VR movement exercises. The avatars were (1) generic and gender-matched, (2) customized from a set of pre-existing options, or (3) personalized photorealistic scans. Compared to the other conditions, participants with personalized avatars reported increased SoE, yet higher eeriness and reduced body awareness. Further, SoE and SoP positively correlated with body awareness across conditions. Our results indicate that VR user experience and body awareness do not always dovetail and do not necessarily predict each other. Future research should work towards a balance between body awareness and SoE.
2022
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik,
A Case Study on the Rapid Development of Natural and Synergistic Multimodal Interfaces for XR Use-Cases
, In
CHI Conference on Human Factors in Computing Systems Extended Abstracts
.
New York, NY, USA
:
Association for Computing Machinery
, 2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10.1145/3491101.3503552,
title = {A Case Study on the Rapid Development of Natural and Synergistic Multimodal Interfaces for XR Use-Cases},
author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-chi-case-study-mmi-zimmerer.pdf},
doi = {10.1145/3491101.3503552}
}
Abstract: Multimodal Interfaces (MMIs) supporting the synergistic use of natural modalities like speech and gesture have been conceived as promising for spatial or 3D interactions, e.g., in Virtual, Augmented, and Mixed Reality (XR for short). Yet, the currently prevailing user interfaces are unimodal. Commercially available software platforms like the Unity or Unreal game engines simplify the complexity of developing XR applications through appropriate tool support. They provide ready-to-use device integration, e.g., for 3D controllers or motion tracking, and according interaction techniques such as menus, (3D) point-and-click, or even simple symbolic gestures to rapidly develop unimodal interfaces. A comparable tool support is yet missing for multimodal solutions in this and similar areas. We believe that this hinders user-centered research based on rapid prototyping of MMIs, the identification and formulation of practical design guidelines, the development of killer applications highlighting the power of MMIs, and ultimately a widespread adoption of MMIs. This article investigates potential reasons for the ongoing uncommonness of MMIs. Our case study illustrates and analyzes lessons learned during the development and application of a toolchain that supports rapid development of natural and synergistic MMIs for XR use-cases. We analyze the toolchain in terms of developer usability, development time, and MMI customization. This analysis is based on the knowledge gained in years of research and academic education. Specifically, it reflects on the development of appropriate MMI tools and their application in various demo use-cases, in user-centered research, and in the lab work of a mandatory MMI course of an HCI master’s program. The derived insights highlight successful choices made as well as potential areas for improvement.
Chiara Palmisano, Peter Kullmann, Ibrahem Hanafi, Marta Verrecchia, Marc Erich Latoschik, Andrea Canessa, Martin Fischbach, Ioannis Ugo Isaias,
A Fully-Immersive Virtual Reality Setup to Study Gait Modulation
, In
Frontiers in Human Neuroscience
, Vol.
16
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10.3389/fnhum.2022.783452,
title = {A Fully-Immersive Virtual Reality Setup to Study Gait Modulation},
author = {Palmisano, Chiara and Kullmann, Peter and Hanafi, Ibrahem and Verrecchia, Marta and Latoschik, Marc Erich and Canessa, Andrea and Fischbach, Martin and Isaias, Ioannis Ugo},
journal = {Frontiers in Human Neuroscience},
year = {2022},
volume = {16},
url = {https://www.frontiersin.org/article/10.3389/fnhum.2022.783452},
doi = {10.3389/fnhum.2022.783452}
}
Abstract: Objective: Gait adaptation to environmental challenges is fundamental for independent and safe community ambulation. The possibility of precisely studying gait modulation using standardized protocols of gait analysis closely resembling everyday life scenarios is still an unmet need. Methods: We have developed a fully-immersive virtual reality (VR) environment where subjects have to adjust their walking pattern to avoid collision with a virtual agent (VA) crossing their gait trajectory. We collected kinematic data of 12 healthy young subjects walking in the real world (RW) and in the VR environment, both with (VR/A+) and without (VR/A-) the VA perturbation. The VR environment closely resembled the RW scenario of the gait laboratory. To ensure standardized obstacle presentation, the starting time, speed, and trajectory of the VA were defined using the kinematics of the participant as detected online during each walking trial. Results: We did not observe kinematic differences between walking in RW and VR/A-, suggesting that our VR environment per se might not induce significant changes in the locomotor pattern. When facing the VA, all subjects consistently reduced stride length and velocity while increasing stride duration. Trunk inclination and mediolateral trajectory deviation also facilitated avoidance of the obstacle. Conclusions: This proof-of-concept study shows that our VR/A+ paradigm effectively induced timely gait modulation in a standardized, immersive, and realistic scenario. This protocol could be a powerful research tool to study gait modulation and its derangements in relation to aging and clinical conditions.
Nina Döllinger, Christopher Göttfert, Erik Wolf, David Mal, Marc Erich Latoschik, Carolin Wienrich,
Analyzing Eye Tracking Data in Mirror Exposure
, In
2022 Conference on Mensch und Computer
, p. 513–517
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{dollinger2022eyetracking,
title = {Analyzing Eye Tracking Data in Mirror Exposure},
author = {Döllinger, Nina and Göttfert, Christopher and Wolf, Erik and Mal, David and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {2022 Conference on Mensch und Computer},
year = {2022},
pages = {513–517},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-muc-eyetracking_in_mirror_exposition-preprint.pdf},
doi = {10.1145/3543758.3547567}
}
Abstract: Mirror exposure is an important method in the treatment of body image disturbances. Eye tracking can support the unaffected assessment of attention biases during mirror exposure. However, the analysis of eye tracking data in mirror exposure comes with various difficulties and is associated with a high manual workload during data processing. We present an automated data processing framework that enables us to determine any body part as an area of interest without placing markers on the bodies of participants. A short, formative user study proved the quality compared to the gold standard. The automatic processing and openness for different systems allow a broad range of applications.
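The core of such a marker-free area-of-interest (AOI) assignment can be reduced to a nearest-reference-point lookup: each gaze hit on the mirrored avatar is attributed to the closest body part. This sketch is purely illustrative and not the framework described above; the reference points and names are hypothetical stand-ins for positions that would in practice come from the avatar's skeleton.

```python
import numpy as np

# Illustrative body-part reference points in avatar space (in a real system
# these would be derived from the tracked avatar's skeleton joints).
AOI_CENTERS = {
    "head":  np.array([0.0, 1.70, 0.0]),
    "torso": np.array([0.0, 1.20, 0.0]),
    "legs":  np.array([0.0, 0.60, 0.0]),
}

def classify_gaze(hit_point, aois=AOI_CENTERS):
    """Map a 3D gaze hit on the mirror image to the closest body-part AOI."""
    return min(aois, key=lambda part: np.linalg.norm(aois[part] - hit_point))
```

Running this per gaze sample yields a sequence of AOI labels from which dwell times and attention biases can be aggregated automatically.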
René Stingl, Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik,
Are You Referring to Me? - Giving Virtual Objects Awareness
, In
2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
, pp. 671-673
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{9974498,
title = {Are You Referring to Me? - Giving Virtual Objects Awareness},
author = {Stingl, René and Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
year = {2022},
pages = {671-673},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ismar-natural-pointing-preprint.pdf},
doi = {10.1109/ISMAR-Adjunct57072.2022.00139}
}
Abstract: This work introduces an interaction technique to determine the user’s non-verbal deixis in Virtual Reality (VR) applications. We tailored it for multimodal speech & gesture interfaces (MMIs). Here, non-verbal deixis is often determined by the use of ray-casting due to its simplicity and intuitiveness. However, ray-casting’s rigidness and dichotomous nature pose limitations concerning the MMI’s flexibility and efficiency. In contrast, our technique considers a more comprehensive set of directional cues to determine non-verbal deixis and provides probabilistic output to tackle these limitations. We present a machine-learning-based reference implementation of our technique in VR and the results of a first performance benchmark. Future work includes an in-depth user study evaluating our technique’s user experience in an MMI.
Timo Menzel, Mario Botsch, Marc Erich Latoschik,
Automated Blendshape Personalization for Faithful Face Animations Using Commodity Smartphones
, In
Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22)
(
22)
.
New York, NY, USA
:
Association for Computing Machinery
, 2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{menzel2022automated,
title = {Automated Blendshape Personalization for Faithful Face Animations Using Commodity Smartphones},
author = {Menzel, Timo and Botsch, Mario and Latoschik, Marc Erich},
booktitle = {Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22)},
year = {2022},
number = {22},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3562939.3565622},
doi = {10.1145/3562939.3565622}
}
Abstract: Digital reconstruction of humans has various interesting use-cases. Animated virtual humans, avatars and agents alike, are the central entities in virtual embodied human-computer and human-human encounters in social XR. Here, a faithful reconstruction of facial expressions becomes paramount due to their prominent role in non-verbal behavior and social interaction. Current XR-platforms, like Unity 3D or the Unreal Engine, integrate recent smartphone technologies to animate faces of virtual humans by facial motion capturing. Using the same technology, this article presents an optimization-based approach to generate personalized blendshapes as animation targets for facial expressions. The proposed method combines a position-based optimization with a seamless partial deformation transfer, necessary for a faithful reconstruction. Our method is fully automated and considerably outperforms existing solutions based on example-based facial rigging or deformation transfer, and overall results in a much lower reconstruction error. It also neatly integrates with recent smartphone-based reconstruction pipelines for mesh generation and automated rigging, further paving the way to a widespread application of human-like and personalized avatars and agents in various use-cases.
Larissa Brübach, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
Breaking Plausibility Without Breaking Presence - Evidence For The Multi-Layer Nature Of Plausibility
, In
IEEE Transactions on Visualization and Computer Graphics
, Vol.
28
(
5)
, pp. 2267-2276
.
2022.
IEEE VR Best Journal Paper Nominee 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{9714117,
title = {Breaking Plausibility Without Breaking Presence - Evidence For The Multi-Layer Nature Of Plausibility},
author = {Brübach, Larissa and Westermeier, Franziska and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2022},
volume = {28},
number = {5},
pages = {2267-2276},
note = {IEEE VR Best Journal Paper Nominee 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ieeevr-breaking-plausibility-without-breaking-presence.pdf},
doi = {10.1109/TVCG.2022.3150496}
}
Abstract: A novel theoretical model recently introduced coherence and plausibility as the essential conditions of XR experiences, challenging contemporary presence-oriented concepts. This article reports on two experiments validating this model, which assumes coherence activation on three layers (cognition, perception, and sensation) as the potential sources leading to a condition of plausibility and from there to other XR qualia such as presence or body ownership. The experiments introduce and utilize breaks in plausibility (in analogy to breaks in presence): We induce incoherence on the perceptual and the cognitive layer simultaneously by a simulation of object behaviors that do not conform to the laws of physics, i.e., gravity. We show that this manipulation breaks plausibility and hence confirm that it results in the desired effects in the theorized condition space but that the breaks in plausibility did not affect presence. In addition, we show that a cognitive manipulation by a storyline framing is too weak to successfully counteract the strong bottom-up inconsistencies. Both results are in line with the predictions of the recently introduced three-layer model of coherence and plausibility, which incorporates well-known top-down and bottom-up rivalries and its theorized increased independence between plausibility and presence.
Christian Seufert, Sebastian Oberdörfer, Alice Roth, Silke Grafe, Jean-Luc Lugrin, Marc Erich Latoschik,
Classroom management competency enhancement for student teachers using a fully immersive virtual classroom
, In
Computers & Education
, Vol.
179
, p. 104410
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{seufert:2022n,
title = {Classroom management competency enhancement for student teachers using a fully immersive virtual classroom},
author = {Seufert, Christian and Oberdörfer, Sebastian and Roth, Alice and Grafe, Silke and Lugrin, Jean-Luc and Latoschik, Marc Erich},
journal = {Computers & Education},
year = {2022},
volume = {179},
pages = {104410},
url = {https://www.sciencedirect.com/science/article/pii/S0360131521002876},
doi = {10.1016/j.compedu.2021.104410}
}
Abstract: The purpose of this study is to examine whether pre-service teachers studying classroom management (CM) in a virtual reality (VR)-supported setting enhance their CM competencies more than students do in a setting using conventional methods. With this aim in mind, and to address the lack of opportunities for practicing and reflecting on CM competencies beyond simply gaining theoretical knowledge about CM in education, we integrated a novel fully immersive VR application in selected CM courses. We evaluated the development of self-assessed and instructor-rated CM competencies and the learning quality in the different learning conditions. Additionally, we evaluated the presence, social presence, believability and utility of the VR application and the VR-assisted and video-assisted course. Participants were pre-service teachers (n = 55) of the University of Würzburg who participated in a quasi-experimental pre-test/post-test intervention. The students were randomly assigned to one of two intervention groups: the test group used the virtual classroom Breaking Bad Behaviors (BBB) during the term (n = 39), and the comparison group's learning was video-assisted (n = 16). The instructor rating shows significant differences between the VR group and the video group, between the two points of measurement, and for the interaction between condition and time of measurement. It demonstrates a highly significant improvement in CM competencies in the VR setting between the pre-test and post-test (p < 0.001, Cohen's d = 1.06) as compared to the video-based setting. The participants of the VR setting themselves rated their CM competencies in the post-test significantly higher than in the pre-test (p = 0.02, Cohen's d = 0.39). Interestingly, the video group also rated themselves better in the post-test (p = 0.02, Cohen's d = 0.67), which reveals that self-assessment and external assessment show different results.
In addition, we observed that even though both groups gained similar theoretical knowledge, the CM competencies developed to a greater degree in the VR-based setting. The participants rated the CM training system a useful tool to evaluate and reflect on individual teacher actions. Its immersion contributes to a high presence and the simulation of realistic scenarios in a CM course. These findings suggest that VR-based settings can lead to a higher benefit in the enhancement of pre-service teachers' CM competencies.
Christian Rack, Andreas Hotho, Marc Erich Latoschik,
Comparison of Data Encodings and Machine Learning Architectures for User Identification on Arbitrary Motion Sequences
, In
Proceedings of the IEEE International Conference on Artificial Intelligence & Virtual Reality (IEEE AIVR)
.
IEEE
, 2022.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{schell2022comparison,
title = {Comparison of Data Encodings and Machine Learning Architectures for User Identification on Arbitrary Motion Sequences},
author = {Rack, Christian and Hotho, Andreas and Latoschik, Marc Erich},
booktitle = {Proceedings of the IEEE International Conference on Artificial Intelligence & Virtual Reality (IEEE AIVR)},
year = {2022},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ieeeaivr-schell-comparison-of-data-representations-and-machine-learning-architectures-for-user-identification-on-arbitrary-motion-sequences.pdf},
doi = {10.1109/AIVR56993.2022.00010}
}
Marc Erich Latoschik, Carolin Wienrich,
Congruence and Plausibility, not Presence?! Pivotal Conditions for XR Experiences and Effects, a Novel Model
, In
Frontiers in Virtual Reality
, Vol.
3:694433
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{latoschik2021coherence,
title = {Congruence and Plausibility, not Presence?! Pivotal Conditions for XR Experiences and Effects, a Novel Model},
author = {Latoschik, Marc Erich and Wienrich, Carolin},
journal = {Frontiers in Virtual Reality},
year = {2022},
volume = {3:694433},
url = {https://www.frontiersin.org/article/10.3389/frvir.2022.694433},
doi = {10.3389/frvir.2022.694433}
}
Abstract: Presence is often considered the most important quale describing the subjective feeling of being in a computer-generated and/or computer-mediated virtual environment. The identification and separation of orthogonal presence components, i.e., the place illusion and the plausibility illusion, has been an accepted theoretical model describing Virtual Reality (VR) experiences for some time. This perspective article challenges this presence-oriented VR theory. First, we argue that a place illusion cannot be the major construct to describe the much wider scope of virtual, augmented, and mixed reality (VR, AR, MR: or XR for short). Second, we argue that there is no plausibility illusion but merely plausibility, and we derive the place illusion caused by the congruent and plausible generation of spatial cues and similarly for all the current model’s so-defined illusions. Finally, we propose congruence and plausibility to become the central essential conditions in a novel theoretical model describing XR experiences and effects.
Yann Glémarec, Jean-Luc Lugrin, Anne-Gwenn Bosser, Cedric Buche, Marc Erich Latoschik,
Controlling the STAGE: A High-Level Control System for Virtual Audiences In Virtual Reality
, In
Frontiers in Virtual Reality – Virtual Reality and Human Behaviour
, Vol.
3
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{glemarec2022controlling,
title = {Controlling the STAGE: A High-Level Control System for Virtual Audiences In Virtual Reality},
author = {Glémarec, Yann and Lugrin, Jean-Luc and Bosser, Anne-Gwenn and Buche, Cedric and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality – Virtual Reality and Human Behaviour},
year = {2022},
volume = {3},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.876433/abstract}
}
Abstract: This article presents a novel method for controlling a virtual audience system (VAS) in Virtual Reality (VR) applications, called STAGE, which was originally designed for supervised public speaking training in university seminars dedicated to the preparation and delivery of scientific talks.
We are interested in creating pedagogical narratives: narratives encompass affective phenomena, and rather than organizing events that change the course of a training scenario, pedagogical plans using our system focus on organizing the affects it arouses in the trainees.
Efficiently controlling a virtual audience towards a specific training objective while evaluating the speaker's performance presents a challenge for a seminar instructor: controlling the virtual audience is cognitively and physically demanding, as the instructor must simultaneously evaluate the speaker's performance and adjust the audience so that it quickly reacts to the user's behaviors and interactions. It is indeed a critical limitation of a number of existing systems that they rely on a Wizard of Oz approach, where the tutor drives the audience in reaction to the user's performance. We address this problem by integrating into a VAS a high-level control component for tutors, which allows using predefined audience behavior rules, defining custom ones, and intervening during run-time for finer control of the unfolding of the pedagogical plan. At its core, this component offers a tool to program, select, modify and monitor interactive training narratives using a high-level representation.
STAGE offers the following features: i) a high-level API to program pedagogical narratives focusing on a specific public speaking situation and training objectives, ii) an interactive visualization interface, iii) computation and visualization of user metrics, iv) a semi-autonomous virtual audience composed of virtual spectators with automatic reactions to the speaker and surrounding spectators while following the pedagogical plan, and v) the possibility for the instructor to embody a virtual spectator to ask questions or guide the speaker from within the Virtual Environment. We present here the design and implementation of the tutoring system and its integration in STAGE, and discuss its reception by end-users.
Christian Krupitzer, Jens Naber, Jan-Philipp Stauffert, Jan Mayer, Jan Spielmann, Paul Ehmann, Noel Boci, Maurice Bürkle, André Ho, Clemens Komorek, Felix Heinickel, Samuel Kounev, Christian Becker, Marc Erich Latoschik,
CortexVR: Immersive analysis and training of cognitive executive functions of soccer players using virtual reality and machine learning
, In
Frontiers in Psychology
, Vol.
13
.
Frontiers
, 2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{krupitzer2022cortexvr,
title = {CortexVR: Immersive analysis and training of cognitive executive functions of soccer players using virtual reality and machine learning},
author = {Krupitzer, Christian and Naber, Jens and Stauffert, Jan-Philipp and Mayer, Jan and Spielmann, Jan and Ehmann, Paul and Boci, Noel and Bürkle, Maurice and Ho, André and Komorek, Clemens and Heinickel, Felix and Kounev, Samuel and Becker, Christian and Latoschik, Marc Erich},
journal = {Frontiers in Psychology},
year = {2022},
volume = {13},
publisher = {Frontiers},
url = {https://www.frontiersin.org/articles/10.3389/fpsyg.2022.754732/full},
doi = {10.3389/fpsyg.2022.754732}
}
Abstract: Goal: This paper presents an immersive Virtual Reality (VR) system to analyze and train Executive Functions (EFs) of soccer players. EFs are important cognitive functions for athletes. They are a relevant quality that distinguishes amateurs from professionals.
Method: The system is based on immersive technology; hence, the user interacts naturally and experiences a training session in a virtual world. The proposed system has a modular design supporting the extension of various so-called game modes. Game modes combine selected game mechanics with specific simulation content to target particular training aspects. The system architecture decouples selection/parameterization and analysis of training sessions via a coaching app from a Unity3D-based VR simulation core. User performance and progress are recorded in a database that sends the necessary feedback to the coaching app for analysis. Results: The system is tested for VR-critical performance criteria to reveal the usefulness of a new interaction paradigm in the cognitive training and analysis of EFs. Subjective ratings for overall usability show that the design as a VR application enhances the user experience compared to a traditional desktop app, whereas the new, unfamiliar interaction paradigm does not negatively impact the effort for using the application.
Conclusion: The system can provide immersive training of EFs in a fully virtual environment, eliminating potential distraction. It further provides an easy-to-use analysis tool to compare users, as well as an automatic, adaptive training mode.
Christian Rack, Fabian Sieper, Lukas Schach, Murat Yalcin, Marc E. Latoschik,
Dataset: Who is Alyx? (GitHub Repository)
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@dataset{who_is_alyx_2022,
title = {Dataset: Who is Alyx? (GitHub Repository)},
author = {Rack, Christian and Sieper, Fabian and Schach, Lukas and Yalcin, Murat and Latoschik, Marc E.},
year = {2022},
url = {https://github.com/cschell/who-is-alyx},
doi = {10.5281/zenodo.6472417}
}
Abstract: This dataset contains over 110 hours of motion, eye-tracking and physiological data from 71 players of the virtual reality game “Half-Life: Alyx”. Each player played the game on two separate days for about 45 minutes using an HTC Vive Pro.
Simon Seibt, Bartosz von Ramon Lipinski, Marc Erich Latoschik,
Dense Feature Matching based on Homographic Decomposition
, In
IEEE Access
, Vol.
X
, pp. 1-1
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{seibt2022dense,
title = {Dense Feature Matching based on Homographic Decomposition},
author = {Seibt, Simon and von Ramon Lipinski, Bartosz and Latoschik, Marc Erich},
journal = {IEEE Access},
year = {2022},
volume = {X},
pages = {1-1},
url = {https://ieeexplore.ieee.org/document/9716106},
doi = {10.1109/ACCESS.2022.3152539}
}
Abstract: Finding robust and accurate feature matches is a fundamental problem in computer vision. However, incorrect correspondences and suboptimal matching accuracies lead to significant challenges for many real-world applications. In conventional feature matching, corresponding features in an image pair are greedily searched using their descriptor distance. The resulting matching set is then typically used as input for geometric model fitting methods to find an appropriate fundamental matrix and filter out incorrect matches. Unfortunately, this basic approach cannot solve all practical problems, such as fundamental matrix degeneration, matching ambiguities caused by repeated patterns and rejection of initially mismatched features without further reconsideration. In this paper we introduce a novel matching pipeline, which addresses all of the aforementioned challenges at once: First, we perform iterative rematching to give mismatched feature points a further chance of being considered in later processing steps. Thereby, we are searching for inliers that exhibit the same homographic transformation per iteration. The resulting homographic decomposition is used for refining matches, occlusion detection (e.g. due to parallaxes) and extrapolation of additional features in critical image areas. Furthermore, Delaunay triangulation of the matching set is utilized to minimize the repeated pattern problem and to implement focused matching. Doing so enables us to further increase matching quality by concentrating on local image areas, defined by the triangular mesh. We present and discuss experimental results with multiple real-world matching datasets. Our contributions, besides improving matching recall and precision for image processing applications in general, also relate to use cases in image-based computer graphics.
Jinghuai Lin, Marc Erich Latoschik,
Digital body, identity and privacy in social virtual reality: A systematic review
, In
Frontiers in Virtual Reality
, Vol.
3
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{lin2022digital,
title = {Digital body, identity and privacy in social virtual reality: A systematic review},
author = {Lin, Jinghuai and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2022},
volume = {3},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.974652},
doi = {10.3389/frvir.2022.974652}
}
Abstract: Social Virtual Reality (social VR or SVR) provides digital spaces for diverse human activities, social interactions, and embodied face-to-face encounters. While our digital bodies in SVR can in general be of almost any conceivable appearance, individualized or even personalized avatars bearing users’ likeness recently became an interesting research topic. Such digital bodies show a great potential to enhance the authenticity of social VR citizens and increase the trustworthiness of interpersonal interaction. However, using such digital bodies might expose users to privacy and identity issues such as identity theft: For instance, how do we know whether the avatars we encounter in the virtual world are who they claim to be? Safeguarding users’ identities and privacy, and preventing harm from identity infringement, are crucial to the future of social VR. This article provides a systematic review on the protection of users’ identity and privacy in social VR, with a specific focus on digital bodies. Based on 814 sources, we identified and analyzed 49 papers that either: 1) discuss or raise concerns about the addressed issues, 2) provide technologies and potential solutions for protecting digital bodies, or 3) examine the relationship between the digital bodies and users of social VR. We notice a severe lack of research and attention on the addressed topic and identify several research gaps that need to be filled. While some legal and ethical concerns about the potential identity issues of the digital bodies have been raised, and although some progress has been made in specific areas such as user authentication, little research has proposed practical solutions. Finally, we suggest potential future research directions for digital body protection and include relevant research that might provide insights.
We hope this work could provide a good overview of the existing discussion, potential solutions, and future directions for researchers with similar concerns. We also wish to draw attention to identity and privacy issues in social VR and call for interdisciplinary collaboration.
Erik Wolf, Nina Döllinger, David Mal, Stephan Wenninger, Andrea Bartl, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Does Distance Matter? Embodiment and Perception of Personalized Avatars in Relation to the Self-Observation Distance in Virtual Reality
, In
Frontiers in Virtual Reality
, Vol.
3
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{wolf2022distance,
title = {Does Distance Matter? Embodiment and Perception of Personalized Avatars in Relation to the Self-Observation Distance in Virtual Reality},
author = {Wolf, Erik and Döllinger, Nina and Mal, David and Wenninger, Stephan and Bartl, Andrea and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin},
journal = {Frontiers in Virtual Reality},
year = {2022},
volume = {3},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-frontiers-self-observation-distance.pdf},
doi = {10.3389/frvir.2022.1031093}
}
Abstract: Virtual reality applications employing avatar embodiment typically use virtual mirrors to allow users to perceive their digital selves not only from a first-person perspective but also from a holistic third-person view. However, due to distance-related biases such as the distance compression effect or a reduced relative rendering resolution, the self-observation distance (SOD) between the user and the virtual mirror might influence how users perceive their embodied avatar. Our article systematically investigates the effects of a short (1 meter), middle (2.5 meters), and far (4 meters) SOD between user and mirror on the perception of personalized and self-embodied avatars. The avatars were photorealistically reconstructed using state-of-the-art photogrammetric methods. Thirty participants were repeatedly exposed to their real-time animated self-embodied avatars in each of the three SOD conditions. In each condition, the personalized avatars were repeatedly altered in their body weight, and participants were asked to judge the (1) sense of embodiment, (2) body weight perception, and (3) affective appraisal towards their avatar. We found that the different SODs are unlikely to influence any of our measures except for the perceived body weight estimation difficulty. Here, the participants judged the difficulty significantly higher for the farthest SOD. We further found that the participants' self-esteem significantly impacted their ability to modify their avatar's body weight to their current body weight and that it positively correlated with the perceived attractiveness of the avatar. Additionally, the participants' concerns about their body shape affected how eerie they perceived their avatars. Both measures influenced the perceived body weight estimation difficulty.
For practical application, we conclude that the virtual mirror in embodiment scenarios can be freely placed and varied at a distance of one to four meters from the user without expecting major effects on the perception of the avatar.
Sebastian Oberdörfer, David Schraudt, Marc Erich Latoschik,
Embodied Gambling–Investigating the Influence of Level of Embodiment, Avatar Appearance, and Virtual Environment Design on an Online VR Slot Machine
, In
Frontiers in Virtual Reality
, Vol.
3
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{oberdorfer2022embodied,
title = {Embodied Gambling–Investigating the Influence of Level of Embodiment, Avatar Appearance, and Virtual Environment Design on an Online VR Slot Machine},
author = {Oberdörfer, Sebastian and Schraudt, David and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2022},
volume = {3},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.828553},
doi = {10.3389/frvir.2022.828553}
}
Abstract: Slot machines are one of the most played games by players suffering from gambling disorder. New technologies like immersive Virtual Reality (VR) offer more possibilities to exploit erroneous beliefs in the context of gambling. Recent research indicates a higher risk potential when playing a slot machine in VR than on desktop. To continue this investigation, we evaluate the effects of providing different degrees of embodiment, i.e., minimal and full embodiment. The avatars used for the full embodiment further differ in their appearance, i.e., they elicit a high or a low socio-economic status. The design of the virtual environment (VE) can also influence the overall gambling behavior. Thus, we also embed the slot machine in two different VEs that differ in their emotional design: a colorful underwater playground environment and a virtual counterpart of our lab. These design considerations resulted in four different versions of the same VR slot machine: 1) full embodiment with high socio-economic status, 2) full embodiment with low socio-economic status, 3) minimal embodiment playground VE, and 4) minimal embodiment laboratory VE. Both full embodiment versions also used the playground VE. We determine the risk potential by logging gambling frequency as well as stake size, and measuring harm-inducing factors, i.e., dissociation, urge to gamble, dark flow, and illusion of control, using questionnaires. Following a between-groups experimental design, 82 participants played one of the four versions for 20 game rounds. We recruited our sample from the students enrolled at the University of Würzburg. Our safety protocol ensured that only participants without any recent gambling activity took part in the experiment. In this comparative user study, we found no effect of embodiment or VE design on gambling frequency, stake sizes, or risk potential.
However, our results provide further support for the hypothesis that the higher visual angle on gambling stimuli, and hence the increased emotional response, is the true cause of the higher risk potential.
Erik Wolf, Marie Luisa Fiedler, Nina Döllinger, Carolin Wienrich, Marc Erich Latoschik,
Exploring Presence, Avatar Embodiment, and Body Perception with a Holographic Augmented Reality Mirror
, In
2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)
, pp. 350-359
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{wolf2022holographic,
title = {Exploring Presence, Avatar Embodiment, and Body Perception with a Holographic Augmented Reality Mirror},
author = {Wolf, Erik and Fiedler, Marie Luisa and Döllinger, Nina and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
year = {2022},
pages = {350-359},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ieeevr-hololens-embodiment-preprint.pdf},
doi = {10.1109/VR51125.2022.00054}
}
Abstract: The embodiment of avatars in virtual reality (VR) is a promising tool for enhancing the user's mental health. A great example is the treatment of body image disturbances, where eliciting a full-body illusion can help identify, visualize, and modulate persisting misperceptions. Augmented reality (AR) could complement recent advances in the field by incorporating real elements, such as the therapist or the user's real body, into therapeutic scenarios. However, research on the use of AR in this context is very sparse. Therefore, we present a holographic AR mirror system based on an optical see-through (OST) device and markerless body tracking, collect valuable qualitative feedback regarding its user experience, and compare quantitative results regarding presence, embodiment, and body weight perception to similar systems using video see-through (VST) AR and VR. For our OST AR system, a total of 27 normal-weight female participants provided predominantly positive feedback on display properties (field of view, luminosity, and transparency of virtual objects), body tracking, and the perception of the avatar’s appearance and movements. In the quantitative comparison to the VST AR and VR systems, participants reported significantly lower feelings of presence, while they estimated the body weight of the generic avatar significantly higher when using our OST AR system. For virtual body ownership and agency, we found only partially significant differences. In summary, our study shows the general applicability of OST AR in the given context offering huge potential in future therapeutic scenarios. However, the comparative evaluation between OST AR, VST AR, and VR also revealed significant differences in relevant measures. Future work is mandatory to corroborate our findings and to classify the significance in a therapeutic context.
Sebastian Oberdörfer, Philipp Krop, Samantha Straka, Silke Grafe, Marc Erich Latoschik,
Fly My Little Dragon: Using AR to Learn Geometry
, In
Proceedings of the IEEE Conference on Games (CoG '22)
, pp. 528-531
.
IEEE
, 2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2022little,
title = {Fly My Little Dragon: Using AR to Learn Geometry},
author = {Oberdörfer, Sebastian and Krop, Philipp and Straka, Samantha and Grafe, Silke and Latoschik, Marc Erich},
booktitle = {Proceedings of the IEEE Conference on Games (CoG '22)},
year = {2022},
pages = {528-531},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-fly-my-little-dragon-preprint.pdf},
doi = {10.1109/CoG51982.2022.9893601}
}
Abstract: We present the gamified AR learning environment ARES. During gameplay, learners guide the little dragon called “Ares” through dungeons. Using AR, ARES three-dimensionally integrates the dungeons into the real world. Each dungeon represents a coordinate system with obstacles that ultimately create mathematical exercises, e.g. rotating a door to let the dragon fly through it. Overcoming such a challenge requires a learner to spatially analyze the exercise and apply the fundamental mathematical principles. ARES displays a debriefing screen at the end of a dungeon to further support the learning process. In a preliminary qualitative user study, pre-service and in-service teachers saw great potential in ARES for providing further practical and motivating exercises to deepen knowledge in the classroom and at home.
Rebecca M. Hein, Marc Erich Latoschik, Carolin Wienrich,
Inter- and Transcultural Learning in Social Virtual Reality: A Proposal for an Inter- and Transcultural Virtual Object Database to be Used in the Implementation, Reflection, and Evaluation of Virtual Encounters
, In
Multimodal Technologies and Interaction
Mark Billinghurst, Fotis Liarokapis, Lars Erik Holmquist, Mu-Chun Su (Eds.),
, Vol.
6
(
7)
, p. 50
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{mti6070050,
title = {Inter- and Transcultural Learning in Social Virtual Reality: A Proposal for an Inter- and Transcultural Virtual Object Database to be Used in the Implementation, Reflection, and Evaluation of Virtual Encounters},
author = {Hein, Rebecca M. and Latoschik, Marc Erich and Wienrich, Carolin},
editor = {Billinghurst, Mark and Liarokapis, Fotis and Holmquist, Lars Erik and Su, Mu-Chun},
journal = {Multimodal Technologies and Interaction},
year = {2022},
volume = {6},
number = {7},
pages = {50},
url = {https://www.mdpi.com/2414-4088/6/7/50},
doi = {10.3390/mti6070050}
}
Abstract: Visual stimuli are frequently used to improve memory, language learning or perception, and understanding of metacognitive processes. However, in virtual reality (VR), there are few systematically and empirically derived databases. This paper proposes the first collection of virtual objects based on empirical evaluation for inter- and transcultural encounters between English- and German-speaking learners. We used explicit and implicit measurement methods to identify cultural associations and the degree of stereotypical perception for each virtual stimulus (n = 293) through two online studies, including native German and English-speaking participants. The analysis resulted in a final well-describable database of 128 objects (called InteractionSuitcase). In future applications, the objects can be used as a great interaction or conversation asset and behavioral measurement tool in social VR applications, especially in the field of foreign language education. For example, encounters can use the objects to describe their culture, or teachers can intuitively assess stereotyped attitudes of the encounters.
Philipp Krop, Samantha Straka, Melanie Ullrich, Maximilian Ertl, Marc Erich Latoschik,
IT-Supported Request Management for Clinical Radiology: Contextual Design and Remote Prototype Testing
, In
CHI Conference on Human Factors in Computing Systems Extended Abstracts
(
45)
, pp. 1-8
.
New York, NY, USA
:
Association for Computing Machinery
, 2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{krop2022a,
title = {IT-Supported Request Management for Clinical Radiology: Contextual Design and Remote Prototype Testing},
author = {Krop, Philipp and Straka, Samantha and Ullrich, Melanie and Ertl, Maximilian and Latoschik, Marc Erich},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts},
year = {2022},
number = {45},
pages = {1-8},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://dl.acm.org/doi/10.1145/3491101.3503571},
doi = {10.1145/3491101.3503571}
}
Abstract: Management of radiology requests in larger clinical contexts is characterized by a complex and distributed workflow. In our partner hospital, representing many similar clinics, these processes often still rely on exchanging physical papers and forms, making patient or case data challenging to access. This often leads to phone calls with long waiting queues, which are time-inefficient and result in frequent interrupts. We report on a user-centered design approach based on Rapid Contextual Design with an additional focus group to optimize and iteratively develop a new workflow. Participants found our prototypes fast and intuitive, the design clean and consistent, relevant information easy to access, and the request process fast and easy. Due to the COVID pandemic, we switched to remote prototype testing, which yielded equally good feedback and increased the participation rate. In the end, we propose best practices for remote prototype testing in hospitals with complex and distributed workflows.
Sophia C Steinhaeusser, Sebastian Oberdörfer, Sebastian von Mammen, Marc Erich Latoschik, Birgit Lugrin,
Joyful Adventures and Frightening Places - Designing Emotion-Inducing Virtual Environments
, In
Frontiers in Virtual Reality
, Vol.
3
, pp. 2673-4192
.
2022.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@article{Steinhaeusser:2022ab,
title = {Joyful Adventures and Frightening Places - Designing Emotion-Inducing Virtual Environments},
author = {Steinhaeusser, Sophia C and Oberdörfer, Sebastian and von Mammen, Sebastian and Latoschik, Marc Erich and Lugrin, Birgit},
journal = {Frontiers in Virtual Reality},
year = {2022},
volume = {3},
pages = {2673--4192},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.919163},
doi = {10.3389/frvir.2022.919163}
}
Andreas Halbig, Sooraj K. Babu, Shirin Gatter, Marc Erich Latoschik, Kirsten Brukamp, Sebastian von Mammen,
Opportunities and Challenges of Virtual Reality in Healthcare -- A Domain Experts Inquiry
, In
Frontiers in Virtual Reality
, Vol.
3
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{Halbig:2022aa,
title = {Opportunities and Challenges of Virtual Reality in Healthcare -- A Domain Experts Inquiry},
author = {Halbig, Andreas and Babu, Sooraj K. and Gatter, Shirin and Latoschik, Marc Erich and Brukamp, Kirsten and von Mammen, Sebastian},
journal = {Frontiers in Virtual Reality},
year = {2022},
volume = {3},
url = {https://www.frontiersin.org/article/10.3389/frvir.2022.837616},
doi = {10.3389/frvir.2022.837616}
}
Abstract: In recent years, the applications and accessibility of Virtual Reality (VR) for the healthcare sector have continued to grow. However, so far, most VR applications are only relevant in research settings. Information about what healthcare professionals would need to independently integrate VR applications into their daily working routines is missing. The actual needs and concerns of the people who work in the healthcare sector are often disregarded in the development of VR applications, even though they are the ones who are supposed to use them in practice. By means of this study, we systematically involve health professionals in the development process of VR applications. In particular, we conducted an online survey with 102 healthcare professionals based on a video prototype which demonstrates a software platform that allows them to create and utilise VR experiences on their own. For this study, we adapted and extended the Technology Acceptance Model (TAM). The survey focused on the perceived usefulness and the ease of use of such a platform, as well as the attitude and ethical concerns the users might have. The results show a generally positive attitude toward such a software platform. The users can imagine various use cases in different health domains. However, the perceived usefulness is tied to the actual ease of use of the platform and sufficient support for learning and working with the platform. In the discussion, we explain how these results can be generalized to facilitate the integration of VR in healthcare practice.
Erik Wolf, David Mal, Viktor Frohnapfel, Nina Döllinger, Stephan Wenninger, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Plausibility and Perception of Personalized Virtual Humans between Virtual and Augmented Reality
, In
2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 489-498
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{wolf2022plausibility,
title = {Plausibility and Perception of Personalized Virtual Humans between Virtual and Augmented Reality},
author = {Wolf, Erik and Mal, David and Frohnapfel, Viktor and Döllinger, Nina and Wenninger, Stephan and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2022},
pages = {489-498},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ismar-avatar_plausibility_and_perception_display_study-preprint.pdf},
doi = {10.1109/ISMAR55827.2022.00065}
}
Abstract: This article investigates the effects of different XR displays on the perception and plausibility of personalized virtual humans. We compared immersive virtual reality (VR), video see-through augmented reality (VST AR), and optical see-through AR (OST AR). The personalized virtual alter egos were generated by state-of-the-art photogrammetry methods. 42 participants were repeatedly exposed to animated versions of their 3D-reconstructed virtual alter egos in each of the three XR display conditions. The reconstructed virtual alter egos were additionally modified in body weight for each repetition. We show that the display types lead to different degrees of incongruence between the renderings of the virtual humans and the presentation of the respective environmental backgrounds, leading to significant effects of perceived mismatches as part of a plausibility measurement. The device-related effects were further partly confirmed by subjective misestimations of the modified body weight and the measured spatial presence. Here, the exceedingly incongruent OST AR condition leads to the significantly highest weight misestimations as well as to the lowest perceived spatial presence. However, similar effects could not be confirmed for the affective appraisal (i.e., humanness, eeriness, or attractiveness) of the virtual humans, giving rise to the assumption that these factors might be unrelated to each other.
Chris Zimmerer, Philipp Krop, Martin Fischbach, Marc Erich Latoschik,
Reducing the Cognitive Load of Playing a Digital Tabletop Game with a Multimodal Interface
, In
CHI Conference on Human Factors in Computing Systems
.
New York, NY, USA
:
Association for Computing Machinery
, 2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10.1145/3491102.3502062,
title = {Reducing the Cognitive Load of Playing a Digital Tabletop Game with a Multimodal Interface},
author = {Zimmerer, Chris and Krop, Philipp and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {CHI Conference on Human Factors in Computing Systems},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://dl.acm.org/doi/10.1145/3491102.3502062},
doi = {10.1145/3491102.3502062}
}
Abstract: Multimodal Interfaces (MMIs) combining speech and spatial input have the potential to elicit minimal cognitive load. Low cognitive load increases effectiveness as well as user satisfaction and is regarded as an important aspect of intuitive use. While this potential has been extensively theorized in the research community, experiments that provide supporting observations based on functional interfaces are still scarce. In particular, there is a lack of studies comparing the commonly used Unimodal Interfaces (UMIs) with theoretically superior synergistic MMI alternatives. Yet, these studies are an essential prerequisite for generalizing results, developing practice-oriented guidelines, and ultimately exploiting the potential of MMIs in a broader range of applications. This work contributes a novel observation towards the resolution of this shortcoming in the context of the following combination of applied interaction techniques, tasks, application domain, and technology: We present a comprehensive evaluation of a synergistic speech & touch MMI and a touch-only menu-based UMI (interaction techniques) for selection and system control tasks in a digital tabletop game (application domain) on an interactive surface (technology). Cognitive load, user experience, and intuitive use are evaluated, with the former being assessed by means of the dual-task paradigm. Our experiment shows that the implemented MMI causes significantly less cognitive load and is perceived significantly more usable and intuitive than the UMI. Based on our results, we derive recommendations for the interface design of digital tabletop games on interactive surfaces. Further, we argue that our results and design recommendations are suitable to be generalized to other application domains on interactive surfaces for selection and system control tasks.
Carolin Wienrich, Lennart Fries, Marc Erich Latoschik,
Remote at Court – Challenges and Solutions of Video Conferencing in the Judicial System
, In
Proceedings 25th HCI International Conference
, Vol.
Design, Operation and Evaluation of Mobile Communications
, pp. 82-106
.
Springer
, 2022.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{wienrich2022remote,
title = {Remote at Court – Challenges and Solutions of Video Conferencing in the Judicial System},
author = {Wienrich, Carolin and Fries, Lennart and Latoschik, Marc Erich},
booktitle = {Proceedings 25th HCI International Conference},
year = {2022},
volume = {Design, Operation and Evaluation of Mobile Communications},
pages = {82-106},
publisher = {Springer},
url = {}
}
Nina Döllinger, Erik Wolf, David Mal, Stephan Wenninger, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Resize Me! Exploring the User Experience of Embodied Realistic Modulatable Avatars for Body Image Intervention in Virtual Reality
, In
Frontiers in Virtual Reality
, Vol.
3
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{dollinger2022resizeme,
title = {Resize Me! Exploring the User Experience of Embodied Realistic Modulatable Avatars for Body Image Intervention in Virtual Reality},
author = {Döllinger, Nina and Wolf, Erik and Mal, David and Wenninger, Stephan and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin},
journal = {Frontiers in Virtual Reality},
year = {2022},
volume = {3},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-frontiers-modulatable-avatars-body-image-intervention.pdf},
doi = {10.3389/frvir.2022.935449}
}
Abstract: Obesity is a serious disease that can affect both physical and psychological well-being. Due to weight stigmatization, many affected individuals suffer from body image disturbances whereby they perceive their body in a distorted way, evaluate it negatively, or neglect it. Beyond established interventions such as mirror exposure, recent advancements aim to complement body image treatments by the embodiment of visually altered virtual bodies in virtual reality (VR). We present a high-fidelity prototype of an advanced VR system that allows users to embody a rapidly generated personalized, photorealistic avatar and to realistically modulate its body weight in real-time within a carefully designed virtual environment. In a formative multi-method approach, a total of 12 participants rated the general user experience (UX) of our system during body scan and VR experience using semi-structured qualitative interviews and multiple quantitative UX measures. Using body weight modification tasks, we further compared three different interaction methods for real-time body weight modification and measured our system’s impact on the body image relevant measures body awareness and body weight perception. From the feedback received, demonstrating an already solid UX of our overall system and providing constructive input for further improvement, we derived a set of design guidelines to guide future development and evaluation processes of systems supporting body image interventions.
Sebastian Keppler, Nina Döllinger, Carolin Wienrich, Marc Erich Latoschik, Johann Habakuk Israel,
Self-Touch: An Immersive Interaction-Technique to Enhance Body Awareness
, In
i-com
, Vol.
21
(
3)
, pp. 329-337
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{KepplerDöllingerWienrichLatoschikIsrael+2022+329+337,
title = {Self-Touch: An Immersive Interaction-Technique to Enhance Body Awareness},
author = {Keppler, Sebastian and Döllinger, Nina and Wienrich, Carolin and Latoschik, Marc Erich and Israel, Johann Habakuk},
journal = {i-com},
year = {2022},
volume = {21},
number = {3},
pages = {329--337},
url = {https://doi.org/10.1515/icom-2022-0028},
doi = {10.1515/icom-2022-0028}
}
Abstract: Physical well-being depends essentially on how one's own body is perceived. A mismatch between the perception of one's own body and reality can be distressing and eventually lead to mental illness. Touching one's own body is a multi-sensory experience that strengthens the feeling for one's own body. We have developed an interaction technique that allows self-touch of one's own body in an immersive environment to support therapy procedures. Through additional visual feedback, we aim to strengthen the feeling for one's own body and thereby achieve a sustainable effect on body perception. We conducted an expert evaluation to analyze the potential impact of our application and to localize and fix possible usability problems. The experts noted the ease of understanding and the suitability of the interaction technique for increasing body awareness. However, technical challenges such as stable and accurate body tracking were also mentioned. In addition, new ideas were given that would further support body awareness.
Sandra Birnstiel, Sebastian Oberdörfer, Marc Erich Latoschik,
Stay Safe! Safety Precautions for Walking on a Conventional Treadmill in VR
, In
Proceedings of the 29th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '22)
, pp. 732-733
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{birnstiel2022safety,
title = {Stay Safe! Safety Precautions for Walking on a Conventional Treadmill in VR},
author = {Birnstiel, Sandra and Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 29th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '22)},
year = {2022},
pages = {732-733},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ieeevr-stay-safe.pdf},
doi = {10.1109/VRW55335.2022.00217}
}
Abstract: Conventional treadmills are used in virtual reality (VR) applications, such as for rehabilitation training or gait studies. However, using the devices in VR poses risks of injury. Therefore, this study investigates safety precautions when using a conventional treadmill for a walking task. We designed a safety belt and displayed parts of the treadmill in VR. The safety belt was much appreciated by the participants and did not affect the walking behavior. However, the participants requested more visual cues in the user’s field of view.
Andrea Bartl, Christian Merz, Daniel Roth, Marc Erich Latoschik,
The Effects of Avatar and Environment Design on Embodiment, Presence, Activation, and Task Load in a Virtual Reality Exercise Application
, In
IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{bartl2022effects,
title = {The Effects of Avatar and Environment Design on Embodiment, Presence, Activation, and Task Load in a Virtual Reality Exercise Application},
author = {Bartl, Andrea and Merz, Christian and Roth, Daniel and Latoschik, Marc Erich},
booktitle = {IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2022},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ismar-ilast-avatar-environment-design-vr-exercise-application.pdf}
}
Abstract: The development of embodied Virtual Reality (VR) systems involves multiple central design choices. These design choices affect the user perception and therefore require thorough consideration. This article reports on two user studies investigating the influence of common design choices on relevant intermediate factors (sense of embodiment, presence, motivation, activation, and task load) in a VR application for physical exercises. The first study manipulated the avatar fidelity (abstract, partial body vs. anthropomorphic, full-body) and the environment (with vs. without mirror). The second study manipulated the avatar type (healthy vs. injured) and the environment type (beach vs. hospital) and, hence, the avatar-environment congruence. The full-body avatar significantly increased the sense of embodiment and decreased mental demand. Interestingly, the mirror did not influence the dependent variables. The injured avatar significantly increased the temporal demand. The beach environment significantly reduced the tense activation. On the beach, participants felt more present in the incongruent condition embodying the injured avatar.
Rebecca Hein, Marc Erich Latoschik, Carolin Wienrich,
Usability and User Experience of Virtual Objects Supporting Learning and Communicating in Virtual Reality
, In
Mensch und Computer 2022 - Tagungsband
Bastian Pfleging, Kathrin Gerling, Sven Mayer (Eds.),
, pp. 510-514
.
New York
:
ACM
, 2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{mci/Hein2022,
title = {Usability and User Experience of Virtual Objects Supporting Learning and Communicating in Virtual Reality},
author = {Hein, Rebecca and Latoschik, Marc Erich and Wienrich, Carolin},
editor = {Pfleging, Bastian and Gerling, Kathrin and Mayer, Sven},
booktitle = {Mensch und Computer 2022 - Tagungsband},
year = {2022},
pages = {510-514},
publisher = {ACM},
address = {New York},
url = {https://dl.gi.de/handle/20.500.12116/39266},
doi = {10.1145/3543758.3547568}
}
Abstract: This study evaluates the usability and user experience of the InteractionSuitcase, a collection of virtual objects with cultural connotations (associated with German or Anglo-American culture). The objects are intended to promote communication about cultural differences and similarities in English lessons at German schools and are thus part of a didactic concept for using social VR for trans- and intercultural learning. Future teachers used the virtual objects during a practical seminar, rated them as useful, and associated them with a positive user experience. Since the virtual objects are meant to encourage communication, the reported sense of connectedness rather than isolation is a particularly important result. The results demonstrate the readiness of the InteractionSuitcase for further pedagogical applications.
David Mal, Erik Wolf, Nina Döllinger, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik,
Virtual Human Coherence and Plausibility – Towards a Validated Scale
, In
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
, pp. 788-789
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{mal2022virtual,
title = {Virtual Human Coherence and Plausibility – Towards a Validated Scale},
author = {Mal, David and Wolf, Erik and Döllinger, Nina and Botsch, Mario and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2022},
pages = {788-789},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022_ieeevr_virtual_human_plausibility_preprint.pdf},
doi = {10.1109/VRW55335.2022.00245}
}
Abstract: Virtual humans contribute to users’ state of plausibility in various XR applications. We present the development and preliminary evaluation of a self-assessment questionnaire to quantify a virtual human’s plausibility in virtual environments based on eleven concise items. A principal component analysis of 650 appraisals collected in an online survey revealed two highly reliable components within the items. We interpret the components as possible factors, i.e., appearance and behavior plausibility and match to the virtual environment, and propose future work aiming towards a standardized virtual human plausibility scale by validating the structure and sensitivity of both sub-components in XR environments.
Nina Döllinger, Erik Wolf, David Mal, Nico Erdmannsdörfer, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Virtual Reality for Mind and Body: Does the Sense of Embodiment Towards a Virtual Body Affect Physical Body Awareness?
, In
CHI 22 Conference on Human Factors in Computing Systems Extended Abstracts
, pp. 1-8
.
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{dollinger2022virtual,
title = {Virtual Reality for Mind and Body: Does the Sense of Embodiment Towards a Virtual Body Affect Physical Body Awareness?},
author = {Döllinger, Nina and Wolf, Erik and Mal, David and Erdmannsdörfer, Nico and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin},
booktitle = {CHI 22 Conference on Human Factors in Computing Systems Extended Abstracts},
year = {2022},
pages = {1-8},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-chi-body-awareness-virtual-human-perception-preprint.pdf},
doi = {10.1145/3491101.3519613}
}
Abstract: Mind-body therapies aim to improve health by combining physical and mental exercises. Recent developments tend to incorporate virtual reality (VR) into their design and execution, but there is a lack of research concerning the inclusion of virtual bodies and their effect on body awareness in these designs. In this study, 24 participants performed in-VR body awareness movement tasks in front of a virtual mirror while embodying a photorealistic, personalized avatar. Subsequently, they performed a heartbeat counting task and rated their perceived body awareness and sense of embodiment towards the avatar. We found a significant relationship between sense of embodiment and self-reported body awareness but not between sense of embodiment and heartbeat counting. Future work can build on these findings and further explore the relationship between avatar embodiment and body awareness.
Rebecca Hein, Jeanine Steinbock, Maria Eisenmann, Carolin Wienrich, Marc Erich Latoschik,
Virtual Reality im modernen Englischunterricht und das Potenzial für Inter- und Transkulturelles Lernen
, In
MedienPädagogik Zeitschrift für Theorie und Praxis der Medienbildung
, Vol.
47
, pp. 246-266
.
2022.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@article{hein2021medienpaed,
title = {Virtual Reality im modernen Englischunterricht und das Potenzial für Inter- und Transkulturelles Lernen},
author = {Hein, Rebecca and Steinbock, Jeanine and Eisenmann, Maria and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {MedienPädagogik Zeitschrift für Theorie und Praxis der Medienbildung},
year = {2022},
volume = {47},
pages = {246-266},
url = {https://www.researchgate.net/publication/359903684_Virtual_Reality_im_modernen_Englischunterricht_und_das_Potenzial_fur_Inter-_und_Transkulturelles_Lernen/references},
doi = {10.21240/mpaed/47/2022.04.12.X}
}
2021
Florian Kern, Matthias Popp, Peter Kullmann, Elisabeth Ganal, Marc Erich Latoschik,
3D Printing an Accessory Dock for XR Controllers and its Exemplary Use as XR Stylus
, In
27th ACM Symposium on Virtual Reality Software and Technology
, pp. 1-3
.
Osaka, Japan
:
Association for Computing Machinery
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{kern2021printing,
title = {3D Printing an Accessory Dock for XR Controllers and its Exemplary Use as XR Stylus},
author = {Kern, Florian and Popp, Matthias and Kullmann, Peter and Ganal, Elisabeth and Latoschik, Marc Erich},
booktitle = {27th ACM Symposium on Virtual Reality Software and Technology},
year = {2021},
pages = {1-3},
publisher = {Association for Computing Machinery},
address = {Osaka, Japan},
url = {https://doi.org/10.1145/3489849.3489949},
doi = {10.1145/3489849.3489949}
}
Abstract: This article introduces the accessory dock, a 3D printed multipurpose extension for consumer-grade XR controllers that enables flexible mounting of self-made and commercial accessories. The uniform design of our concept opens new opportunities for XR systems being used for more diverse purposes, e.g., researchers and practitioners could use and compare arbitrary XR controllers within their experiments while ensuring access to buttons and battery housing. As a first example, we present a stylus tip accessory to build an XR Stylus, which can be directly used with frameworks for handwriting, sketching, and UI interaction on physically aligned virtual surfaces. For new XR controllers, we provide instructions on how to adjust the accessory dock to the controller’s form factor. A video tutorial for the construction and the source files for 3D printing are publicly available for reuse, replication, and extension (https://go.uniwue.de/hci-otss-accessory-dock).
Desirée Weber, Stephan Hertweck, Hisham Alwanni, Lukas D. J. Fiederer, Xi Wang, Fabian Unruh, Martin Fischbach, Marc Erich Latoschik, Tonio Ball,
A Structured Approach to Test the Signal Quality of Electroencephalography Measurements During Use of Head-Mounted Displays for Virtual Reality Applications
, In
Frontiers in Neuroscience
, Vol.
15
, p. 1527
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{weber2021structured,
title = {A Structured Approach to Test the Signal Quality of Electroencephalography Measurements During Use of Head-Mounted Displays for Virtual Reality Applications},
author = {Weber, Desirée and Hertweck, Stephan and Alwanni, Hisham and Fiederer, Lukas D. J. and Wang, Xi and Unruh, Fabian and Fischbach, Martin and Latoschik, Marc Erich and Ball, Tonio},
journal = {Frontiers in Neuroscience},
year = {2021},
volume = {15},
pages = {1527},
url = {https://www.frontiersin.org/article/10.3389/fnins.2021.733673},
doi = {10.3389/fnins.2021.733673}
}
Abstract: Joint applications of virtual reality (VR) systems and electroencephalography (EEG) offer numerous new possibilities ranging from behavioral science to therapy. VR systems allow for highly controlled experimental environments, while EEG offers a non-invasive window to brain activity with a millisecond-ranged temporal resolution. However, EEG measurements are highly susceptible to electromagnetic (EM) noise, and the influence of EM noise of head-mounted displays (HMDs) on EEG signal quality has not been conclusively investigated. In this paper, we propose a structured approach to test HMDs for EM noise potentially harmful to EEG measures. The approach verifies the impact of HMDs on the frequency- and time-domain of the EEG signal recorded in healthy subjects. The verification task includes a comparison of conditions with and without an HMD during (i) an eyes-open vs. eyes-closed task, and (ii) with respect to the sensory-evoked brain activity. The approach is developed and tested to derive potential effects of two commercial HMDs, the Oculus Rift and the HTC Vive Pro, on the quality of 64-channel EEG measurements. The results show that the HMDs consistently introduce artifacts, especially at the line hum of 50 Hz and the HMD refresh rate of 90 Hz, respectively, and their harmonics. The frequency range that is typically most important in non-invasive EEG research and applications (<50 Hz), however, remained largely unaffected. Hence, our findings demonstrate that high-quality EEG recordings, at least in the frequency range up to 50 Hz, can be obtained with the two tested HMDs. However, the number of commercially available HMDs is constantly rising. We strongly suggest thoroughly testing such devices upfront, since each HMD will most likely have its own EM footprint, and this article provides a structured approach to implement such tests with arbitrary devices.
Rebecca Magdalena Hein, Carolin Wienrich, Marc Erich Latoschik,
A Systematic Review of Foreign Language Learning with Immersive Technologies (2001-2020)
, In
AIMS Electronics and Electrical Engineering
, Vol.
5
(
2)
, pp. 117-145
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{hein2021systematic,
title = {A Systematic Review of Foreign Language Learning with Immersive Technologies (2001-2020)},
author = {Hein, Rebecca Magdalena and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {AIMS Electronics and Electrical Engineering},
year = {2021},
volume = {5},
number = {2},
pages = {117-145},
url = {http://www.aimspress.com/article/doi/10.3934/electreng.2021007},
doi = {10.3934/electreng.2021007}
}
Abstract: This study provides a systematic literature review of research (2001–2020) in the field of teaching and learning a foreign language and intercultural learning using immersive technologies. Based on 2507 sources, 54 articles were selected according to predefined selection criteria. The review is aimed at providing information about which immersive interventions are being used for foreign language learning and teaching and where potential research gaps exist. The papers were analyzed and coded according to the following categories: (1) investigation form and education level, (2) degree of immersion and technology used, (3) predictors, and (4) criteria. The review identified key research findings relating to the use of immersive technologies for learning and teaching a foreign language and intercultural learning at cognitive, affective, and conative levels. The findings revealed research gaps in the area of teachers as a target group and virtual reality (VR) as a fully immersive intervention form. Furthermore, the studies reviewed rarely examined behavior and implicit measurements related to inter- and transcultural learning and teaching. Inter- and transcultural learning and teaching in particular is an underrepresented subject of investigation. Finally, concrete suggestions for future research are given. The systematic review contributes to the challenge of interdisciplinary cooperation between pedagogy, foreign language didactics, and human-computer interaction to achieve innovative teaching-learning formats and a successful digital transformation.
Andreas Halbig, Marc Erich Latoschik,
A Systematic Review of Physiological Measurements, Factors, Methods, and Applications in Virtual Reality
, In
Frontiers in Virtual Reality
, Vol.
2
, p. 89
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10.3389/frvir.2021.694567,
title = {A Systematic Review of Physiological Measurements, Factors, Methods, and Applications in Virtual Reality},
author = {Halbig, Andreas and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
pages = {89},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-frotniers-review-physiological-measurements.pdf},
doi = {10.3389/frvir.2021.694567}
}
Abstract: Measurements of physiological parameters provide an objective, often non-intrusive, and (at least semi-)automatic evaluation and utilization of user behavior. In addition, specific Virtual Reality (VR) hardware devices often ship with built-in sensors, e.g., eye-tracking and movement sensors. Hence, the combination of physiological measurements and VR applications seems promising. Several approaches have investigated the applicability and benefits of this combination for various fields of application. However, the range of possible application fields, coupled with potentially useful and beneficial physiological parameters, types of sensors, target variables and factors, and analysis approaches and techniques, is manifold. This article provides a systematic overview and an extensive state-of-the-art review of the usage of physiological measurements in VR. We identified 1,119 works that make use of physiological measurements in VR. Within these, we identified 32 approaches that focus on the classification of characteristics of experience common in VR applications. The first part of this review categorizes the 1,119 works by field of application, i.e., therapy, training, entertainment, and communication and interaction, as well as by the specific target factors and variables measured by the physiological parameters. An additional category summarizes general VR approaches applicable to all specific fields of application, since they target typical VR qualities. In the second part of this review, we analyze the target factors and variables regarding the respective methods used for an automatic analysis and, potentially, classification. For example, we highlight which measurement setups have been proven sensitive enough to distinguish different levels of arousal, valence, anxiety, stress, or cognitive workload in the virtual realm.
This work may prove useful for all researchers who want to use physiological data in VR and who seek a good overview of prior approaches, their benefits, and potential drawbacks.
Andrea Bartl, Stephan Wenninger, Erik Wolf, Mario Botsch, Marc Erich Latoschik,
Affordable But Not Cheap: A Case Study of the Effects of Two 3D-Reconstruction Methods of Virtual Humans
, In
Frontiers in Virtual Reality
, Vol.
2
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{bartl2021affordable,
title = {Affordable But Not Cheap: A Case Study of the Effects of Two 3D-Reconstruction Methods of Virtual Humans},
author = {Bartl, Andrea and Wenninger, Stephan and Wolf, Erik and Botsch, Mario and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.694617},
doi = {10.3389/frvir.2021.694617}
}
Abstract: Realistic and lifelike 3D-reconstruction of virtual humans has various exciting and important use cases. Our and others' appearances have notable effects on ourselves and our interaction partners in virtual environments, e.g., on acceptance, preference, trust, believability, behavior (the Proteus effect), and more. Today, multiple approaches for the 3D-reconstruction of virtual humans exist. They significantly vary in terms of the degree of achievable realism, the technical complexities, and finally, the overall reconstruction costs involved. This article compares two 3D-reconstruction approaches with very different hardware requirements. The high-cost solution uses a typical complex and elaborated camera rig consisting of 94 digital single-lens reflex (DSLR) cameras. The recently developed low-cost solution uses a smartphone camera to create videos that capture multiple views of a person. Both methods use photogrammetric reconstruction and template fitting with the same template model and differ in their adaptation to the method-specific input material.
Each method generates high-quality virtual humans ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity. We compare the results of the two 3D-reconstruction methods in an immersive virtual environment against each other in a user study. Our results indicate that the virtual humans from the low-cost approach are perceived similarly to those from the high-cost approach regarding the perceived similarity to the original, human-likeness, beauty, and uncanniness, despite significant differences in the objectively measured quality. The perceived feeling of change of one's own body was higher for the low-cost virtual humans. Quality differences were perceived more strongly for one's own body than for other virtual humans.
Nina Döllinger, Carolin Wienrich, Marc Erich Latoschik,
Challenges and Opportunities of Immersive Technologies for Mindfulness Meditation: A Systematic Review
, In
Frontiers in Virtual Reality
, Vol.
2
, p. 29
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10.3389/frvir.2021.644683,
title = {Challenges and Opportunities of Immersive Technologies for Mindfulness Meditation: A Systematic Review},
author = {Döllinger, Nina and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
pages = {29},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.644683},
doi = {10.3389/frvir.2021.644683}
}
Abstract: Mindfulness is considered an important factor of an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, i.e., by inventing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptive and cognitive manipulations, content presentation, interaction, and agency. These facilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment. However, a successful exploitation of XR that strengthens mindfulness requires a systematic analysis of the potential interrelation and influencing mechanisms between XR technology, its properties, factors, and phenomena and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers of ACM Digital Library and National Institutes of Health's National Library of Medicine (PubMed) with and without empirical efficacy evaluation were included in our analysis. The results reveal that at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments. 
While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as mindfulness-support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions.
Yann Glémarec, Jean-luc Lugrin, Anne-Gwenn Bosser, Cédric Buche, Marc Erich Latoschik,
Conference Talk Training With a Virtual Audience System
, In
ACM Symposium on Virtual Reality Software and Technology
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{glemarec2021conference,
title = {Conference Talk Training With a Virtual Audience System},
author = {Glémarec, Yann and Lugrin, Jean-luc and Bosser, Anne-Gwenn and Buche, Cédric and Latoschik, Marc Erich},
booktitle = {ACM Symposium on Virtual Reality Software and Technology},
year = {2021},
url = {https://dl.acm.org/doi/10.1145/3489849.3489939},
doi = {10.1145/3489849.3489939}
}
Abstract: This paper presents the first prototype of a virtual audience system (VAS) specifically designed as a training tool for conference talks. This system has been tailored for university seminars dedicated to the preparation and delivery of scientific talks. We describe the required features which have been identified during the development process. We also summarize the preliminary feedback received from lecturers and students during the first deployment of the system in seminars for bachelor and doctoral students. Finally, we discuss future work and research directions. We believe our system architecture and features provide interesting insights on the development and integration of VR-based educational tools into university curricula.
Rebecca Hein, Jeanine Steinbock, Maria Eisenmann, Marc Erich Latoschik, Carolin Wienrich,
Development of the InteractionSuitcase in virtual reality to support inter- and transcultural learning processes in English as Foreign Language education
, In
DELFI 2021
Andrea Kienle, Andreas Harrer, Joerg M. Haake, Andreas Lingnau (Eds.),
, pp. 91-96
.
Bonn
:
Gesellschaft für Informatik e.V.
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{hein2021development,
title = {Development of the InteractionSuitcase in virtual reality to support inter- and transcultural learning processes in English as Foreign Language education},
author = {Hein, Rebecca and Steinbock, Jeanine and Eisenmann, Maria and Latoschik, Marc Erich and Wienrich, Carolin},
editor = {Kienle, Andrea and Harrer, Andreas and Haake, Joerg M. and Lingnau, Andreas},
booktitle = {DELFI 2021},
year = {2021},
pages = {91-96},
publisher = {Gesellschaft für Informatik e.V.},
address = {Bonn},
url = {https://dl.gi.de/bitstream/handle/20.500.12116/36994/DELFI_2021_91-96.pdf?sequence=1&isAllowed=y}
}
Abstract: Immersion programs and the experiences they offer learners are irreplaceable. In times of Covid-19, social VR applications can offer enormous potential for the acquisition of inter- and transcultural competencies (ITC). Virtual objects (VO) could initiate communication and reflection processes between learners with different cultural backgrounds and therefore offer an exciting approach. Accordingly, we address the following research questions: (1) What is a sound way to collect objects for the InteractionSuitcase to promote ITC acquisition by means of Social VR? (2) For which aspects do students use the objects when developing an ITC learning scenario? (3) Which VO are considered particularly supportive to initiate and facilitate ITC learning? To answer these research questions, the virtual InteractionSuitcase will be designed and implemented. This paper presents the empirical preliminary work and interim results of the development and evaluation of the InteractionSuitcase, its usage, and the significance of this project for Human-Computer Interaction (HCI) and English as Foreign Language (EFL) research.
Octavia Madeira, Daniel Gromer, Marc Erich Latoschik, Paul Pauli,
Effects of Acrophobic Fear and Trait Anxiety on Human Behavior in a Virtual Elevated Plus-Maze
, In
Frontiers in Virtual Reality
, Vol.
2
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{madeira:2021x,
title = {Effects of Acrophobic Fear and Trait Anxiety on Human Behavior in a Virtual Elevated Plus-Maze},
author = {Madeira, Octavia and Gromer, Daniel and Latoschik, Marc Erich and Pauli, Paul},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
pages = {19},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.635048},
doi = {10.3389/frvir.2021.635048}
}
Abstract: The Elevated Plus-Maze (EPM) is a well-established apparatus to measure anxiety in rodents, i.e., animals exhibiting an increased relative time spent in the closed vs. the open arms are considered anxious. To examine whether such anxiety-modulated behaviors are conserved in humans, we re-translated this paradigm to a human setting using virtual reality in a Cave Automatic Virtual Environment (CAVE) system. In two studies, we examined whether the EPM exploration behavior of humans is modulated by their trait anxiety and also assessed the individuals' levels of acrophobia (fear of heights), claustrophobia (fear of confined spaces), sensation seeking, and the reported anxiety when on the maze. First, we constructed an exact virtual copy of the animal EPM adjusted to human proportions. In analogy to animal EPM studies, participants (N = 30) freely explored the EPM for 5 min. In the second study (N = 61), we redesigned the EPM to make it more human-adapted and to differentiate influences of trait anxiety and acrophobia by introducing various floor textures and lowering the walls of the closed arms to the height of standard handrails. In the first experiment, hierarchical regression analyses of exploration behavior revealed the expected association between open arm avoidance and trait anxiety, and an even stronger association with acrophobic fear. In the second study, results revealed that acrophobia was associated with avoidance of open arms with mesh-floor texture, whereas for trait anxiety, claustrophobia, and sensation seeking, no effect was detected. Also, subjects' fear rating was moderated by all psychometrics but trait anxiety. In sum, both studies consistently indicate that humans show no general open arm avoidance analogous to rodents and that human EPM behavior is modulated most strongly by acrophobic fear, whereas trait anxiety plays a subordinate role. Thus, we conclude that the criteria for cross-species validity are insufficiently met in this case.
Despite the exploratory nature, our studies provide in-depth insights into human exploration behavior on the virtual EPM.
Sebastian Oberdörfer, Samantha Straka, Marc Erich Latoschik,
Effects of Immersion and Visual Angle on Brand Placement Effectiveness
, In
Proceedings of the 28th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '21)
, pp. 440-441
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2021effects,
title = {Effects of Immersion and Visual Angle on Brand Placement Effectiveness},
author = {Oberdörfer, Sebastian and Straka, Samantha and Latoschik, Marc Erich},
booktitle = {Proceedings of the 28th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '21)},
year = {2021},
pages = {440-441},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-advrtize-poster-preprint.pdf},
doi = {10.1109/VRW52623.2021.00102}
}
Abstract: Typical inherent properties of immersive Virtual Reality (VR), such as felt presence, might have an impact on how well brand placements are remembered. In this study, we exposed participants to brand placements in four conditions of varying degrees of immersion and visual angle on the stimulus. Placements appeared either as a poster or as a puzzle. We measured the recall and recognition of these placements. Our study revealed that neither immersion nor the visual angle had a significant impact on memory for brand placements.
Sebastian Oberdörfer, David Heidrich, Sandra Birnstiel, Marc Erich Latoschik,
Enchanted by Your Surrounding? Measuring the Effects of Immersion and Design of Virtual Environments on Decision-Making
, In
Frontiers in Virtual Reality
, Vol.
2
, p. 101
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{oberdorfer2021enchanted,
title = {Enchanted by Your Surrounding? Measuring the Effects of Immersion and Design of Virtual Environments on Decision-Making},
author = {Oberdörfer, Sebastian and Heidrich, David and Birnstiel, Sandra and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
pages = {101},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2021.679277/full},
doi = {10.3389/frvir.2021.679277}
}
Abstract: Impaired decision-making leads to the inability to distinguish between advantageous and disadvantageous choices. The impairment of a person's decision-making is a common goal of gambling games. Given the recent trend of gambling using immersive Virtual Reality, it is crucial to investigate the effects of both immersion and the virtual environment (VE) on decision-making. In a novel user study, we measured decision-making using three virtual versions of the Iowa Gambling Task (IGT). The versions differed with regard to the degree of immersion and the design of the virtual environment. Because emotions affect decision-making, we further measured the positive and negative affect of participants. Since a higher visual angle on a stimulus leads to an increased emotional response, we kept the visual angle on the Iowa Gambling Task the same between our conditions. Our results revealed no significant impact of immersion or the VE on the IGT. We further found no significant difference between the conditions with regard to positive and negative affect. This suggests that neither the medium used nor the design of the VE causes an impairment of decision-making. However, in combination with a recent study, we provide first evidence that a higher visual angle on the IGT leads to an effect of impairment.
Carolin Wienrich, Marc Erich Latoschik,
eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research
, In
Frontiers in Virtual Reality
, Vol.
2
, p. 94
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{wienrich2021extended,
title = {eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research},
author = {Wienrich, Carolin and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
pages = {94},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.686783},
doi = {10.3389/frvir.2021.686783}
}
Abstract: Artificial Intelligence (AI) covers a broad spectrum of computational problems and use cases. Many of those implicate profound and sometimes intricate questions of how humans interact or should interact with AIs. Moreover, many current or future users have abstract ideas of what AI is, depending significantly on the specific embodiment of AI applications. Human-centered design approaches would suggest evaluating the impact of different embodiments on human perception of and interaction with AI, an approach that is difficult to realize due to the sheer complexity of application fields and embodiments in reality. However, here XR opens new possibilities to research human-AI interactions. The article's contribution is twofold: First, it provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum as a framework for and a perspective on different approaches of XR-AI combinations. It motivates XR-AI combinations as a method to learn about the effects of prospective human-AI interfaces and shows why the combination of XR and AI fruitfully contributes to a valid and systematic investigation of human-AI interactions and interfaces. Second, the article provides two exemplary experiments investigating the aforementioned approach for two distinct AI systems. The first experiment reveals an interesting gender effect in human-robot interaction, while the second reveals an Eliza effect of a recommender system. Here, the article introduces two paradigmatic implementations of the proposed XR testbed for human-AI interactions and interfaces and shows how a valid and systematic investigation can be conducted. In sum, the article opens new perspectives on how XR benefits human-centered AI design and development.
Kristina Foerster, Rebecca Hein, Silke Grafe, Marc Erich Latoschik, Carolin Wienrich,
Fostering Intercultural Competencies in Initial Teacher Education. Implementation of Educational Design Prototypes using a Social VR Environment
, In
Proceedings of Innovate Learning Summit 2020 2021
Theo Bastiaens (Ed.),
, pp. 73-86
.
Online, United States
:
Association for the Advancement of Computing in Education (AACE)
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{FoerHeinGraf2021ys,
title = {Fostering Intercultural Competencies in Initial Teacher Education. Implementation of Educational Design Prototypes using a Social VR Environment},
author = {Foerster, Kristina and Hein, Rebecca and Grafe, Silke and Latoschik, Marc Erich and Wienrich, Carolin},
editor = {Bastiaens, Theo},
booktitle = {Proceedings of Innovate Learning Summit 2020 2021},
year = {2021},
pages = {73--86},
publisher = {Association for the Advancement of Computing in Education (AACE)},
address = {Online, United States},
url = {https://www.learntechlib.org/pv/220276/}
}
Abstract: The combination of globalization and digitalization emphasizes the importance of media-related and intercultural competencies of teacher educators and pre-service teachers. This article reports on the initial prototypical implementation of a pedagogical concept to foster such competencies of pre-service teachers. The proposed pedagogical concept utilizes a social VR framework, since related work on the characteristics of VR indicates that this medium is particularly well suited for intercultural professional development processes. The development is integrated into a larger design-based research approach that develops a theory-guided and empirically grounded professional development concept for teacher educators with a special focus on TETC 8. The TETCs provide a suitable competence framework capable of aligning requirements for both media-related and intercultural competencies. In an exploratory study with student teachers we designed...
Thomas Schröter, Jennifer Tiede, Marc Erich Latoschik,
Fostering Teacher Educator Technology Competencies (TETCs) in and with Virtual Reality. A Case Study
, In
Proceedings of EdMedia + Innovate Learning
T. Bastiaens (Ed.),
, pp. 617-629
.
Association for the Advancement of Computing in Education (AACE)
, 2021.
[BibTeX]
[Download]
[BibSonomy]
@conference{schroter2021fostering,
title = {Fostering Teacher Educator Technology Competencies (TETCs) in and with Virtual Reality. A Case Study},
author = {Schröter, Thomas and Tiede, Jennifer and Latoschik, Marc Erich},
editor = {Bastiaens, T.},
booktitle = {Proceedings of EdMedia + Innovate Learning},
year = {2021},
pages = {617-629},
publisher = {Association for the Advancement of Computing in Education (AACE)},
url = {}
}
Thomas Schröter, Jennifer Tiede, Silke Grafe, Marc Erich Latoschik,
Fostering Teacher Educator Technology Competencies (TETCs) in and with Virtual Reality. Results from an Exploratory Study.
, In
Proceedings of Innovate Learning Summit 2021
Theo Bastiaens (Ed.),
, pp. 160-170
.
Online, United States
:
Association for the Advancement of Computing in Education (AACE)
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{SchrTiedGraf2021yx,
title = {Fostering Teacher Educator Technology Competencies (TETCs) in and with Virtual Reality. Results from an Exploratory Study.},
author = {Schröter, Thomas and Tiede, Jennifer and Grafe, Silke and Latoschik, Marc Erich},
editor = {Bastiaens, Theo},
booktitle = {Proceedings of Innovate Learning Summit 2021},
year = {2021},
pages = {160–170},
publisher = {Association for the Advancement of Computing in Education (AACE)},
address = {Online, United States},
url = {}
}
Abstract: This exploratory study presents the findings and implications of a three-hour further development workshop implemented with a convenience sample of six teacher educators from a German university. The workshop aimed at fostering the Teacher Educator Technology Competencies (TETCs), with a special focus on virtual reality (VR), while drawing on the didactic principle of action orientation. To test whether this didactic principle is suitable for fostering the media-pedagogical competencies of the target group in and with VR and to identify further design principles, a mixed-methods approach was utilized. The data collection consisted of focus group interviews, which were analyzed via qualitative content analysis, and Teacher Educator Technology Surveys (TETS). The analysis revealed that action orientation is a valuable addition to established didactical models in higher education (HE). Also...
Sebastian Oberdörfer, Anne Elsässer, Silke Grafe, Marc Erich Latoschik,
Grab the Frog: Comparing Intuitive Use and User Experience of a Smartphone-only, AR-only, and Tangible AR Learning Environment
, In
Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI '21)
.
New York, NY, USA
:
Association for Computing Machinery
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2021comparing,
title = {Grab the Frog: Comparing Intuitive Use and User Experience of a Smartphone-only, AR-only, and Tangible AR Learning Environment},
author = {Oberdörfer, Sebastian and Elsässer, Anne and Grafe, Silke and Latoschik, Marc Erich},
booktitle = {Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI '21)},
year = {2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-mhci-frog-preprint.pdf},
doi = {10.1145/3447526.3472016}
}
Abstract: The integration of Augmented Reality (AR) into teaching concepts allows for the visualization of complex learning contents and can simultaneously enhance learning motivation. By providing Tangible Augmented Reality (TAR), an AR learning environment gains a haptic aspect and allows for direct manipulation of augmented learning materials. However, manipulating tangible objects while using handheld AR might reduce intuitive use and hence user experience, since users need to simultaneously control the application and manipulate the tangible object. Therefore, we compare the differences in intuitive use and user experience evoked by varied technologies of knowledge presentation in an educational context. In particular, we compare a TAR learning environment targeting the learning of the anatomy of vertebrates to its smartphone-only and AR-only versions. The three versions of the learning environment differ only in their method of knowledge presentation. The three versions show a similar perceived intuitive use. The TAR version, however, yielded significantly higher attractiveness and stimulation than the AR-only and smartphone-only versions. This suggests a positive effect of TAR learning environments on the overall learning experience.
Carla Winter, Florian Kern, Dominik Gall, Marc Erich Latoschik, Paul Pauli, Ivo Käthner,
Immersive virtual reality during gait rehabilitation increases walking speed and motivation: A usability evaluation with healthy participants and patients with multiple sclerosis and stroke
, In
Journal of NeuroEngineering and Rehabilitation
, Vol.
18
(68)
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{winter2021immersive,
title = {Immersive virtual reality during gait rehabilitation increases walking speed and motivation: A usability evaluation with healthy participants and patients with multiple sclerosis and stroke},
author = {Winter, Carla and Kern, Florian and Gall, Dominik and Latoschik, Marc Erich and Pauli, Paul and Käthner, Ivo},
journal = {Journal of NeuroEngineering and Rehabilitation},
year = {2021},
volume = {18},
number = {68},
url = {https://jneuroengrehab.biomedcentral.com/articles/10.1186/s12984-021-00848-w},
doi = {10.1186/s12984-021-00848-w}
}
Abstract: Background. The rehabilitation of gait disorders in multiple sclerosis (MS) and stroke patients is often based on conventional treadmill training. Virtual reality (VR)-based treadmill training can increase motivation and improve therapy outcomes.
Objective. The present study aimed at (1) demonstrating the feasibility and acceptance of an immersive virtual reality application (presented via head-mounted display, HMD) for gait rehabilitation with patients, and (2) comparing its effects to a semi-immersive presentation (via a monitor) and conventional treadmill training without VR.
Methods and results. 36 healthy participants and 14 persons with MS or stroke participated in each of the three experimental conditions. For both groups, the walking speed in the HMD condition was higher than in treadmill training without VR. Healthy participants reported a higher motivation after the HMD condition as compared with the other conditions. Importantly, no side effects in the sense of simulator sickness occurred and usability ratings were high. Most of the healthy study participants (89 %) and patients (71 %) preferred the HMD-based training among the three conditions and most patients could imagine using it more frequently.
Conclusion. The study demonstrated the feasibility of combining treadmill training with immersive VR. Due to its high usability and low side effects, the immersive system could serve as a valid alternative to conventional treadmill training in gait rehabilitation. It might be particularly suited to improving patients' training motivation and training outcomes, e.g., walking speed, compared with treadmill training using no or only semi-immersive VR.
Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik,
Impostor-Based Rendering Acceleration for Virtual, Augmented, and Mixed Reality
, In
Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology
.
New York, NY, USA
:
Association for Computing Machinery
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{misiak2021impostorbased,
title = {Impostor-Based Rendering Acceleration for Virtual, Augmented, and Mixed Reality},
author = {Mišiak, Martin and Fuhrmann, Arnulph and Latoschik, Marc Erich},
booktitle = {Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology},
year = {2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3489849.3489865},
doi = {10.1145/3489849.3489865}
}
Abstract: This paper presents an image-based rendering approach to accelerate the rendering of virtual scenes containing a large number of complex, high-poly-count objects. Our approach replaces complex objects with impostors: lightweight image-based representations that reduce geometry- and shading-related processing costs. In contrast to their classical implementation, our impostors are specifically designed to work in Virtual-, Augmented-, and Mixed-Reality scenarios (XR for short), as they support stereoscopic rendering to provide correct depth perception. Motion parallax from typical head movements is compensated by a ray-marched parallax correction step. Our approach recreates impostors dynamically at run time as necessary for larger changes in view position. This dynamic recreation is decoupled from the actual rendering process, so its associated processing cost is distributed over multiple frames. This avoids unwanted frame drops or latency spikes even for impostors of objects with complex geometry and many polygons. In addition to the significant performance benefit, our impostors compare favorably against the original mesh representation, as geometric and textural temporal aliasing artifacts are heavily suppressed.
Yann Glémarec, Jean-Luc Lugrin, Anne-Gwenn Bosser, Aryana Collins Jackson, Cédric Buche, Marc Erich Latoschik,
Indifferent or Enthusiastic? Virtual Audiences Animation and Perception in Virtual Reality
, In
Frontiers in Virtual Reality
, Vol.
2
, p. 72
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10.3389/frvir.2021.666232,
title = {Indifferent or Enthusiastic? Virtual Audiences Animation and Perception in Virtual Reality},
author = {Glémarec, Yann and Lugrin, Jean-Luc and Bosser, Anne-Gwenn and Collins Jackson, Aryana and Buche, Cédric and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
pages = {72},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.666232},
doi = {10.3389/frvir.2021.666232}
}
Abstract: In this paper, we present a virtual audience simulation system for Virtual Reality (VR). The system implements an audience perception model controlling the nonverbal behaviors of virtual spectators, such as facial expressions or postures. Groups of virtual spectators are animated by a set of nonverbal behavior rules representing a particular audience attitude (e.g., indifferent or enthusiastic). Each rule specifies a nonverbal behavior category (posture, head movement, facial expression, or gaze direction) as well as three parameters: type, frequency, and proportion. In a first user study, we asked participants to act as a speaker in VR and to create sets of nonverbal behavior parameters simulating different attitudes. Participants manipulated the nonverbal behaviors of a single virtual spectator to match specific levels of engagement and opinion toward them. In a second user study, we used these parameters to design different types of virtual audiences with our nonverbal behavior rules and evaluated how they were perceived. Our results demonstrate our system’s ability to create virtual audiences with three different perceived attitudes: indifferent, critical, and enthusiastic. The analysis of the results also led to a set of recommendations and guidelines regarding attitudes and expressions for the future design of audiences for VR therapy and training applications.
Rebecca M Hein, Jeanine Steinbock, Maria Eisenmann, Carolin Wienrich, Marc Erich Latoschik,
Inter- und Transkulturelles Lernen in Virtual Reality - Ein Seminarkonzept für die Lehrkräfteausbildung im Fach Englisch
, In
In: Söbke, H. & Weise, M. (Hrsg.), Wettbewerbsband AVRiL 2021
, pp. 34-39
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{hein2021inter,
title = {Inter- und Transkulturelles Lernen in Virtual Reality - Ein Seminarkonzept für die Lehrkräfteausbildung im Fach Englisch},
author = {Hein, Rebecca M and Steinbock, Jeanine and Eisenmann, Maria and Wienrich, Carolin and Latoschik, Marc Erich},
journal = {In: Söbke, H. & Weise, M. (Hrsg.), Wettbewerbsband AVRiL 2021},
year = {2021},
pages = {34-39},
url = {https://dl.gi.de/handle/20.500.12116/37439},
doi = {10.18420/avril2021_05}
}
Abstract: Intercultural and transcultural competencies are central elements of active participation in a modern society. In the context of digitalization, foreign language education research is engaging with new concepts for acquiring these competencies in foreign language teaching. A collaboration with HCI research on the use of social VR in the classroom promises positive insights. The central research hypothesis is that fully immersive learning environments particularly foster intercultural and transcultural learning processes, because learners do not merely observe VR worlds but can interact within them self-efficaciously and manipulate the environment on their own. Building on this, a seminar concept was developed in which pre-service English teachers design intercultural and transcultural teaching/learning scenarios in VR, which they can then use in their own teaching practice. In this way, students not only get to know the potential of VR but also learn how to put it to use in the classroom.
Philipp Krop, Samantha Straka, Melanie Ullrich, Maximilian Ertl, Marc Erich Latoschik,
IT-Supported request management for clinical radiology: Analyzing the Radiological Order Workflow through Contextual Interviews
, In
Mensch und Computer 2021 (MuC '21), September 5-8, 2021, Ingolstadt, Germany
, pp. 1-7
.
New York, NY, USA
:
Association for Computing Machinery
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{kropstraka21,
title = {IT-Supported request management for clinical radiology: Analyzing the Radiological Order Workflow through Contextual Interviews},
author = {Krop, Philipp and Straka, Samantha and Ullrich, Melanie and Ertl, Maximilian and Latoschik, Marc Erich},
booktitle = {Mensch und Computer 2021 (MuC '21), September 5-8, 2021, Ingolstadt, Germany},
year = {2021},
pages = {1-7},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://dl.acm.org/doi/pdf/10.1145/3473856.3473992},
doi = {10.1145/3473856.3473992}
}
Abstract: Requests for radiological examinations in large medical facilities are part of a distributed and complex process with potential health-related risks for patients. A user-centered qualitative analysis with contextual interviews uncovered nine core problems which hinder work efficiency and patient care: (1) difficult access to patient data & requests, (2) the large number of phone calls, (3) restricted & abused access rights, (4) request status that is difficult to track, (5) paper notes used for patient data, (6) lack of assistance for data entry, (7) frustration through documentation, (8) IT systems that are not self-explanatory, and (9) conflict between physicians and radiologists. Contextual interviews proved to be a well-fitting method to analyze and understand this complex process with multiple user roles. The analysis showed that there is room for improvement in the underlying IT systems, workflows, and infrastructure. Our data gave useful insight into solutions to these problems and into how technology can improve all aspects of request management. We are currently addressing these issues with a user-centered design process to design and implement a mobile application, which we will present in future work.
Jan-Philipp Stauffert, Kristof Korwisi, Florian Niebling, Marc Erich Latoschik,
Ka-Boom!!! Visually Exploring Latency Measurements for XR
, In
Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems
.
New York, NY, USA
:
Association for Computing Machinery
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{stauffert_kaboom_2021,
title = {Ka-Boom!!! Visually Exploring Latency Measurements for XR},
author = {Stauffert, Jan-Philipp and Korwisi, Kristof and Niebling, Florian and Latoschik, Marc Erich},
booktitle = {Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
year = {2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-acmchi-latency-comic-preprint.pdf},
doi = {10.1145/3411763.3450379}
}
Abstract: Latency can be detrimental to the experience of Virtual Reality. High latency can lead to loss of performance and cybersickness. There are simple approaches to measure approximate latency and more elaborate ones that provide deeper insight into latency behavior. Yet some researchers still do not measure the latency of the system they use to conduct VR experiments.
This paper provides an illustrated overview of different approaches to measure latency of VR applications, as well as a small decision-making guide to assist in the choice of the measurement method. The visual style offers a more approachable way to understand how to measure latency.
Gabriela Ripka, Silke Grafe, Marc Erich Latoschik,
Mapping pre-service teachers' TPACK development using a social virtual reality and a video-conferencing system
, In
Proceedings of Innovate Learning Summit 2021
T. Bastiaens (Ed.),
, pp. 145-159
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{gabrielaripka,
title = {Mapping pre-service teachers' TPACK development using a social virtual reality and a video-conferencing system},
author = {Ripka, Gabriela and Grafe, Silke and Latoschik, Marc Erich},
editor = {Bastiaens, T.},
booktitle = {Proceedings of Innovate Learning Summit 2021},
year = {2021},
pages = {145-159},
url = {https://www.learntechlib.org/p/220280/}
}
Abstract: Social VR offers great potential for teaching and learning processes: it provides authentic learning environments that enable remote, synchronous interaction and permits learning experiences that affect learners in a multi-sensory way. However, concerning its use to promote pre-service teachers' TPACK in initial teacher education, a research desideratum remains. In this context, this exploratory study addressed the following research question: How did pre-service teachers' TPACK develop using a social VR learning environment prototype in comparison to a video conferencing platform throughout a semester? Following a design-based research approach, an action-oriented pedagogical concept for teaching and learning in social VR was designed and implemented for initial teacher education at a German university with a convenience sample of 14 participants. The lesson plans were collected and analyzed with the help of Epistemic Network Analysis (Shaffer, 2017) at three points in time during the semester and the GATI reflection process (Krauskopf et al., 2018). Further, 14 GATI diagrams gave insights into pre-service teachers' self-estimated TPACK. As the results indicate, pre-service teachers constructed more complex mental models of TPACK in social VR than with the video conferencing platform, indicating that more interrelations between knowledge domains could be constructed by planning and designing VR-integrated lesson plans.
Sebastian Oberdörfer, David Heidrich, Sandra Birnstiel, Marc Erich Latoschik,
Measuring the Effects of Virtual Environment Design on Decision-Making
, In
Proceedings of the 28th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '21)
, pp. 442-443
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2021measuring,
title = {Measuring the Effects of Virtual Environment Design on Decision-Making},
author = {Oberdörfer, Sebastian and Heidrich, David and Birnstiel, Sandra and Latoschik, Marc Erich},
booktitle = {Proceedings of the 28th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '21)},
year = {2021},
pages = {442-443},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-iowa-gambling-task-2-preprint.pdf},
doi = {10.1109/VRW52623.2021.00103}
}
Abstract: Recent research indicates an impairment in decision-making in immersive Virtual Reality (VR) when completing the Iowa Gambling Task (IGT). Emotions have a high potential to explain IGT decision-making behavior. The design of a virtual environment (VE) can influence a user’s mood and hence potentially their decision-making. In a novel user study, we measure decision-making using three virtual versions of the IGT. The versions differ with regard to the degree of immersion and the design of the VE. Our results revealed no significant impact of the VE on the IGT and hence on decision-making.
Sebastian Oberdörfer, Sandra Birnstiel, Marc Erich Latoschik, Silke Grafe,
Mutual Benefits: Interdisciplinary Education of Pre-Service Teachers and HCI Students in VR/AR Learning Environment Design
, In
Frontiers in Education
, Vol.
6
, p. 233
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{oberdorfer2021mutual,
title = {Mutual Benefits: Interdisciplinary Education of Pre-Service Teachers and HCI Students in VR/AR Learning Environment Design},
author = {Oberdörfer, Sebastian and Birnstiel, Sandra and Latoschik, Marc Erich and Grafe, Silke},
journal = {Frontiers in Education},
year = {2021},
volume = {6},
pages = {233},
url = {https://www.frontiersin.org/articles/10.3389/feduc.2021.693012/full},
doi = {10.3389/feduc.2021.693012}
}
Abstract: The successful development and classroom integration of Virtual (VR) and Augmented Reality (AR) learning environments requires competencies and content knowledge with respect to media didactics and the respective technologies. The paper discusses a pedagogical concept specifically aiming at the interdisciplinary education of pre-service teachers in collaboration with human-computer interaction students. The students’ overarching goal is the interdisciplinary realization and integration of VR/AR learning environments in teaching and learning concepts. To assist this approach, we developed a specific tutorial guiding the developmental process. We evaluate and validate the effectiveness of the overall pedagogical concept by analyzing the change in attitudes regarding 1) the use of VR/AR for educational purposes, and in competencies and content knowledge regarding 2) media didactics and 3) technology. Our results indicate a significant improvement in the knowledge of media didactics and technology. We further report on four STEM learning environments that were developed during the seminar.
Florian Kern, Peter Kullmann, Elisabeth Ganal, Kristof Korwisi, Rene Stingl, Florian Niebling, Marc Erich Latoschik,
Off-The-Shelf Stylus: Using XR Devices for Handwriting and Sketching on Physically Aligned Virtual Surfaces
, In
Frontiers in Virtual Reality
Daniel Zielasko (Ed.),
, Vol.
2
, p. 69
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{kern2021offtheshelf,
title = {Off-The-Shelf Stylus: Using XR Devices for Handwriting and Sketching on Physically Aligned Virtual Surfaces},
author = {Kern, Florian and Kullmann, Peter and Ganal, Elisabeth and Korwisi, Kristof and Stingl, Rene and Niebling, Florian and Latoschik, Marc Erich},
editor = {Zielasko, Daniel},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
pages = {69},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2021.684498},
doi = {10.3389/frvir.2021.684498}
}
Abstract: This article introduces the Off-The-Shelf Stylus (OTSS), a framework for 2D interaction (in 3D) as well as for handwriting and sketching with digital pen, ink, and paper on physically aligned virtual surfaces in Virtual, Augmented, and Mixed Reality (VR, AR, MR: XR for short). OTSS supports self-made XR styluses based on consumer-grade six-degrees-of-freedom XR controllers and commercially available styluses. The framework provides separate modules for three basic but vital features: 1) The stylus module provides stylus construction and calibration features. 2) The surface module provides surface calibration and visual feedback features for virtual-physical 2D surface alignment using our so-called 3ViSuAl procedure, and surface interaction features. 3) The evaluation suite provides a comprehensive test bed combining technical measurements for precision, accuracy, and latency with extensive usability evaluations including handwriting and sketching tasks based on established visuomotor, graphomotor, and handwriting research. The framework’s development is accompanied by an extensive open source reference implementation targeting the Unity game engine using an Oculus Rift S headset and Oculus Touch controllers. The development compares three low-cost and low-tech options to equip controllers with a tip and includes a web browser-based surface providing support for interacting, handwriting, and sketching. The evaluation of the reference implementation based on the OTSS framework identified an average stylus precision of 0.98 mm (SD = 0.54 mm) and an average surface accuracy of 0.60 mm (SD = 0.32 mm) in a seated VR environment. The time for displaying the stylus movement as digital ink on the web browser surface in VR was 79.40 ms on average (SD = 23.26 ms), including the physical controller’s motion-to-photon latency visualized by its virtual representation (M = 42.57 ms, SD = 15.70 ms). 
The usability evaluation (N = 10) revealed a low task load, high usability, and high user experience. Participants successfully reproduced given shapes and created legible handwriting, indicating that the OTSS and its reference implementation are ready for everyday use. We provide source code access to our implementation, including stylus and surface calibration and surface interaction features, making it easy to reuse, extend, adapt, and/or replicate previous results (https://go.uniwue.de/hci-otss).
Gabriela Ripka, Silke Grafe, Marc Erich Latoschik,
Peer group supervision in Zoom and social VR-Preparing preservice teachers for planning and designing digital media integrated classes
, In
EdMedia+ Innovate Learning
, pp. 602-616
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{ripka2021peer,
title = {Peer group supervision in Zoom and social VR-Preparing preservice teachers for planning and designing digital media integrated classes},
author = {Ripka, Gabriela and Grafe, Silke and Latoschik, Marc Erich},
booktitle = {EdMedia+ Innovate Learning},
year = {2021},
pages = {602-616},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-edmedia-peer-group-supervision-in-zoom-and-social-vr.pdf}
}
Abstract: 21st-century challenges demand a shift toward collaborative and constructive seminar designs in initial teacher education, so that pre-service teachers acquire meta-conceptual awareness (TPACK) of how to implement emerging technologies in their future profession. Against this background, the paper addresses the following research questions: 1) How should a pedagogical concept for remote initial teacher education be designed to promote metacognitive learning processes of pre-service teachers? 2) How do pre-service teachers perceive these learning processes in video-based communication and social VR? Regarding the pedagogical concept, peer group supervision and an action- and development-oriented approach using Zoom and social VR were identified as relevant for an instructional design that provides collaborative and constructive learning processes for students. In this exploratory study, 17 students participated in two iterative cycles of peer group supervision, performing design tasks in groups. A content analysis of reflective video statements and qualitative group interviews was carried out using a qualitative research design. Results indicate the successful implementation of peer group supervision. Regarding the media's implementation, Zoom's screen-sharing option and breakout sessions benefited the consultation process, as did social VR's "realistic" experience of creating a "sense of community".
David Fernes, Sebastian Oberdörfer, Marc Erich Latoschik,
Recreating a Medieval Mill as a Virtual Learning Environment
, In
Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (VRST '21)
.
New York, NY, USA
:
Association for Computing Machinery
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{fernes2021recreating,
title = {Recreating a Medieval Mill as a Virtual Learning Environment},
author = {Fernes, David and Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (VRST '21)},
year = {2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-vrst-medieval-mill-preprint.pdf},
doi = {10.1145/3489849.3489899}
}
Abstract: Historic buildings shown in open-air museums often lack good accessibility, and visitors can rarely interact with them or with the displayed tools to learn about historical processes. Providing these buildings in Virtual Reality could be a great supplement for museums, offering accessible and interactive experiences. To investigate the effectiveness of this approach and to derive design guidelines, we developed an interactive virtual replica of a medieval mill. We present the design of the mill and the results of a preliminary usability evaluation.
Andrea Bartl, Sungchul Jung, Peter Kullmann, Stephan Wenninger, Jascha Achenbach, Erik Wolf, Christian Schell, Robert W. Lindeman, Mario Botsch, Marc Erich Latoschik,
Self-Avatars in Virtual Reality: A Study Protocol for Investigating the Impact of the Deliberateness of Choice and the Context-Match
, In
2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
, pp. 565-566
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{bartl2021selfavatars,
title = {Self-Avatars in Virtual Reality: A Study Protocol for Investigating the Impact of the Deliberateness of Choice and the Context-Match},
author = {Bartl, Andrea and Jung, Sungchul and Kullmann, Peter and Wenninger, Stephan and Achenbach, Jascha and Wolf, Erik and Schell, Christian and Lindeman, Robert W. and Botsch, Mario and Latoschik, Marc Erich},
booktitle = {2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2021},
pages = {565-566},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-deliberateness-contextmatch-poster-preprint.pdf},
doi = {10.1109/VRW52623.2021.00165}
}
Abstract: The illusion of virtual body ownership (VBO) plays a critical role in virtual reality (VR). VR applications provide a broad design space that includes contextual aspects of the virtual surroundings as well as users' deliberate choices of their appearance in VR, both of which potentially influence VBO and other well-known effects of VR. We propose a protocol for an experiment investigating the influence of deliberateness and context-match on VBO and presence. In a first study, we found significant interactions with the environment. Based on our results, we derive recommendations for future experiments.
Yanyan Qi, Dorothée Bruch, Philipp Krop, Martin J Herrmann, Marc Erich Latoschik, Jürgen Deckert, Grit Hein,
Social buffering of human fear is shaped by gender, social concern and the presence of real vs virtual agents
, In
Translational psychiatry
, Vol.
11
(
1)
, pp. 1–10
.
Nature Publishing Group
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{qi2021social,
title = {Social buffering of human fear is shaped by gender, social concern and the presence of real vs virtual agents},
author = {Qi, Yanyan and Bruch, Dorothée and Krop, Philipp and Herrmann, Martin J and Latoschik, Marc Erich and Deckert, Jürgen and Hein, Grit},
journal = {Translational psychiatry},
year = {2021},
volume = {11},
number = {1},
pages = {1–10},
publisher = {Nature Publishing Group},
url = {https://psyarxiv.com/qx6jg/download?format=pdf}
}
Abstract: The presence of a partner can attenuate physiological fear responses, a phenomenon known as social buffering. However, not all individuals are equally sociable. Here we investigated whether social buffering of fear is shaped by sensitivity to social anxiety (social concern) and whether these effects differ between females and males. We collected skin conductance responses (SCRs) and affect ratings of female and male participants while they experienced aversive and neutral sounds alone (alone treatment) or in the presence of an unknown person of the same gender (social treatment). Individual differences in social concern were assessed with a well-established questionnaire. Our results showed that social concern had a stronger effect on social buffering in females than in males. The lower females scored on social concern, the stronger their SCR reduction in the social compared to the alone treatment. The effect of social concern on social buffering of fear in females disappeared if participants were paired with a virtual agent instead of a real person. Together, these results show that social buffering of human fear is shaped by gender and social concern. In females, the presence of virtual agents can buffer fear irrespective of individual differences in social concern. These findings specify factors that shape the social modulation of human fear and thus might be relevant for the treatment of anxiety disorders.
Carolin Wienrich, Philipp Komma, Stephanie Vogt, Marc Erich Latoschik,
Spatial Presence in Mixed Realities--Considerations about the Concept, Measures, Design, and Experiments
, In
Frontiers in Virtual Reality
Richard Skarbez (Ed.),
, Vol.
2
, p. 141
.
Frontiers
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{wienrichspatial,
title = {Spatial Presence in Mixed Realities--Considerations about the Concept, Measures, Design, and Experiments},
author = {Wienrich, Carolin and Komma, Philipp and Vogt, Stephanie and Latoschik, Marc Erich},
editor = {Skarbez, Richard},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
pages = {141},
publisher = {Frontiers},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.694315},
doi = {10.3389/frvir.2021.694315}
}
Abstract: Plenty of theories, models, measures, and investigations target the understanding of virtual presence, i.e., the sense of presence in immersive Virtual Reality (VR). Other varieties of the so-called eXtended Realities (XR), e.g., Augmented and Mixed Reality (AR and MR), incorporate immersive features to a lesser degree and continuously combine spatial cues from the real physical space and the simulated virtual space. This blurred separation questions whether the accumulated knowledge about virtual presence, and its corresponding outcomes, also applies to presence occurring in other varieties of XR. The present work bridges this gap by analyzing the construct of presence in mixed realities (MR). To achieve this, we present (1) a short review of definitions, dimensions, and measurements of presence in VR, and (2) the state-of-the-art views on MR. Additionally, we (3) derive a working definition of MR, extending the Milgram continuum. This definition is based on entities ranging from real to virtual manifestations at one point in time. Entities possess different degrees of referential power, determining the selection of the frame of reference. Furthermore, we (4) identify three research desiderata, including research questions about the frame of reference, the corresponding dimension of transportation, and the dimension of realism in MR. In particular, the relationship between the main aspects of virtual presence in immersive VR, i.e., the place illusion and the plausibility illusion, and the referential power of MR entities is discussed regarding the concept, measures, and design of presence in MR. Finally, we (5) suggest an experimental setup to reveal the research heuristic behind experiments investigating presence in MR. The present work contributes to the theories and the meaning of, and approaches to simulate and measure, presence in MR.
We hypothesize that research on the essential underlying factors determining user experience (UX) in MR simulations and experiences is still in its infancy and hope this article provides an encouraging starting point to tackle related questions.
Erik Wolf, Nathalie Merdan, Nina Döllinger, David Mal, Carolin Wienrich, Mario Botsch, Marc Erich Latoschik,
The Embodiment of Photorealistic Avatars Influences Female Body Weight Perception in Virtual Reality
, In
2021 IEEE Virtual Reality and 3D User Interfaces (VR)
, pp. 65-74
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{wolf2021embodiment,
title = {The Embodiment of Photorealistic Avatars Influences Female Body Weight Perception in Virtual Reality},
author = {Wolf, Erik and Merdan, Nathalie and Döllinger, Nina and Mal, David and Wienrich, Carolin and Botsch, Mario and Latoschik, Marc Erich},
booktitle = {2021 IEEE Virtual Reality and 3D User Interfaces (VR)},
year = {2021},
pages = {65-74},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-body_weight_perception_embodiment-preprint.pdf},
doi = {10.1109/VR50410.2021.00027}
}
Abstract: Embodiment and body perception have become important research topics in the field of virtual reality (VR). VR is considered a particularly promising tool to support research and therapy regarding distorted body weight perception. However, the influence of embodiment on body weight perception has yet to be clarified. To address this gap, we compared the body weight perception of 56 female participants of normal weight using a VR application. They either (a) embodied a photorealistic, non-personalized virtual human and performed body movements in front of a virtual mirror or (b) only observed the virtual human, as another's avatar (or agent), performing the same movements in front of them. Afterward, participants had to estimate the virtual human's body weight. Additionally, we considered the influence of the participants' body mass index (BMI) on the estimations and captured the participants' feelings of presence and embodiment. Participants who embodied the virtual human as a self-avatar estimated its body weight as significantly lower than participants who rated the virtual human as another's avatar. Furthermore, the estimations of body weight were significantly predicted by the participants' BMI with embodiment, but not without. Our results clearly highlight embodiment as an important factor influencing the perception of virtual humans' body weights in VR.
Negin Hamzeheinejad, Daniel Roth, Samantha Monty, Julian Breuer, Anuschka Rodenberg, Marc Erich Latoschik,
The Impact of Implicit and Explicit Feedback on Performance and Experience during VR-Supported Motor Rehabilitation
, In
2021 IEEE Virtual Reality and 3D User Interfaces (VR)
, pp. 382-391
.
IEEE
, 2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{hamzeheinejad2021impact,
title = {The Impact of Implicit and Explicit Feedback on Performance and Experience during VR-Supported Motor Rehabilitation},
author = {Hamzeheinejad, Negin and Roth, Daniel and Monty, Samantha and Breuer, Julian and Rodenberg, Anuschka and Latoschik, Marc Erich},
booktitle = {2021 IEEE Virtual Reality and 3D User Interfaces (VR)},
year = {2021},
pages = {382--391},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-vrgait-feedback-preprint.pdf},
doi = {10.1109/VR50410.2021.00061}
}
Abstract: This paper examines the impact of implicit and explicit feedback in Virtual Reality (VR) on performance and user experience during motor rehabilitation. In this work, explicit feedback consists of visual and auditory cues provided by a virtual trainer, compared to traditional feedback provided by a real physiotherapist. Implicit feedback was generated by the walking motion of the virtual trainer accompanying the patient during virtual walks. Here, the potential synchrony of movements between trainer and trainee is intended to create an implicit visual affordance of motion adaption. We hypothesize that this will stimulate the activation of mirror neurons, thus fostering neuroadaptive processes. We conducted a clinical user study in a rehabilitation center employing a gait robot. We investigated the performance outcome and subjective experience of the four resulting VR-supported rehabilitation conditions: with/without explicit feedback, and with/without implicit (synchronous motion) stimulation by a virtual trainer. We further included two baseline conditions reflecting the current non-VR procedure in the rehabilitation center. Our results show that additional feedback generally resulted in better patient performance, objectively assessed by the necessary applied support force of the robot. Additionally, our VR-supported rehabilitation procedure improved enjoyment and satisfaction, while no negative impacts could be observed. Implicit feedback and adapted motion synchrony by the virtual trainer led to higher mental demand, giving rise to hopes of increased neural activity and neuroadaptive stimulation.
Fabian Unruh, Maximilian Landeck, Sebastian Oberdörfer, Jean-Luc Lugrin, Marc Erich Latoschik,
The Influence of Avatar Embodiment on Time Perception - Towards VR for Time-Based Therapy
, In
Frontiers in Virtual Reality
, Vol.
2
, p. 71
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10.3389/frvir.2021.658509,
title = {The Influence of Avatar Embodiment on Time Perception - Towards VR for Time-Based Therapy},
author = {Unruh, Fabian and Landeck, Maximilian and Oberdörfer, Sebastian and Lugrin, Jean-Luc and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2021},
volume = {2},
pages = {71},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.658509},
doi = {10.3389/frvir.2021.658509}
}
Abstract: Psycho-pathological conditions, such as depression or schizophrenia, are often accompanied by a distorted perception of time. People suffering from these conditions often report that the passage of time slows down considerably and that they are "stuck in time." Virtual Reality (VR) could potentially help to diagnose and maybe treat such mental conditions. However, the conditions under which a VR simulation could correctly diagnose a time perception deviation are still unknown. In this paper, we present an experiment investigating the difference in time experience with and without a virtual body in VR, also known as an avatar. The process of substituting a person's body with a virtual body is called avatar embodiment. Numerous studies have demonstrated interesting perceptual, emotional, behavioral, and psychological effects caused by avatar embodiment. However, the relations between time perception and avatar embodiment are still unclear. Whether the presence or absence of an avatar already influences time perception is still an open question. Therefore, we conducted a between-subjects design with and without avatar embodiment as well as a real condition (avatar vs. no-avatar vs. real). A group of 105 healthy subjects had to wait for seven and a half minutes in a room without any distractors (e.g., no window, magazine, people, decoration) or time indicators (e.g., clocks, sunlight). The virtual environment replicated the real physical environment. Participants were unaware that they would later be asked to estimate their waiting time and to describe their experience of the passage of time. Our main finding shows that the presence of an avatar leads to a significantly faster perceived passage of time. It seems promising to integrate avatar embodiment in future VR time-based therapy applications, as it could potentially modulate a user's perception of the passage of time. We also found no significant difference in time perception between the real and the VR conditions (avatar, no-avatar), but further research is needed to better understand this outcome.
Martina Mara, Jan-Philipp Stein, Marc Erich Latoschik, Birgit Lugrin, Constanze Schreiner, Rafael Hostettler, Markus Appel,
User Responses to a Humanoid Robot Observed in Real Life, Virtual Reality, 3D and 2D
, In
Frontiers in Psychology - Human-Media Interaction
, Vol.
12
, p. 1152
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{mara2021responses,
title = {User Responses to a Humanoid Robot Observed in Real Life, Virtual Reality, 3D and 2D},
author = {Mara, Martina and Stein, Jan-Philipp and Latoschik, Marc Erich and Lugrin, Birgit and Schreiner, Constanze and Hostettler, Rafael and Appel, Markus},
journal = {Frontiers in Psychology - Human-Media Interaction},
year = {2021},
volume = {12},
pages = {1152},
url = {https://www.frontiersin.org/article/10.3389/fpsyg.2021.633178},
doi = {10.3389/fpsyg.2021.633178}
}
Abstract: Humanoid robots (i.e., robots with a human-like body) are projected to be mass marketed in the future in several fields of application. Today, however, user evaluations of humanoid robots are often based on mediated depictions rather than actual observations or interactions with a robot, which holds true not least for scientific user studies. People can be confronted with robots in various modes of presentation, among them (1) 2D videos, (2) 3D, i.e., stereoscopic videos, (3) immersive Virtual Reality (VR), or (4) live on site. A systematic investigation into how such differential modes of presentation influence user perceptions of a robot is still lacking. Thus, the current study systematically compares the effects of different presentation modes with varying immersive potential on user evaluations of a humanoid service robot. Participants (N = 120) observed an interaction between a humanoid service robot and an actor either on 2D or 3D video, via a virtual reality headset (VR) or live. We found support for the expected effect of the presentation mode on perceived immediacy. Effects regarding the degree of human likeness that was attributed to the robot were mixed. The presentation mode had no influence on evaluations in terms of eeriness, likability, and purchase intentions. Implications for empirical research on humanoid robots and practice are discussed.
Florian Kern, Thore Keser, Florian Niebling, Marc Erich Latoschik,
Using Hand Tracking and Voice Commands to Physically Align Virtual Surfaces in AR for Handwriting and Sketching with HoloLens 2
, In
27th ACM Symposium on Virtual Reality Software and Technology
, pp. 1-3
.
Osaka, Japan
:
Association for Computing Machinery
, 2021.
Best poster award. 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{kern2021using,
title = {Using Hand Tracking and Voice Commands to Physically Align Virtual Surfaces in AR for Handwriting and Sketching with HoloLens 2},
author = {Kern, Florian and Keser, Thore and Niebling, Florian and Latoschik, Marc Erich},
booktitle = {27th ACM Symposium on Virtual Reality Software and Technology},
year = {2021},
pages = {1--3},
publisher = {Association for Computing Machinery},
address = {Osaka, Japan},
note = {Best poster award. 🏆},
url = {https://doi.org/10.1145/3489849.3489940},
doi = {10.1145/3489849.3489940}
}
Abstract: In this paper, we adapt an existing VR framework for handwriting and sketching on physically aligned virtual surfaces to AR environments using the Microsoft HoloLens 2. We demonstrate a multimodal input metaphor to control the framework’s calibration features using hand tracking and voice commands. Our technical evaluation of fingertip/surface accuracy and precision on physical tables and walls is in line with existing measurements on comparable hardware, albeit considerably lower compared to previous work using controller-based VR devices. We discuss design considerations and the benefits of our unified input metaphor suitable for controller tracking and hand tracking systems. We encourage extensions and replication by providing a publicly available reference implementation (https://go.uniwue.de/hci-otss-hololens).
Dominik Gall, Daniel Roth, Jan-Philipp Stauffert, Julian Zarges, Marc Erich Latoschik,
Virtual Body Ownership Intensifies Emotional Responses to Virtual Stimuli
, In
Frontiers in Psychology - Cognitive Science
Steve Richard DiPaola, Ulysses Bernardet, Jonathan Gratch (Eds.),
, Vol.
12
, p. 3833
.
2021.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{gall2021virtual,
title = {Virtual Body Ownership Intensifies Emotional Responses to Virtual Stimuli},
author = {Gall, Dominik and Roth, Daniel and Stauffert, Jan-Philipp and Zarges, Julian and Latoschik, Marc Erich},
editor = {DiPaola, Steve Richard and Bernardet, Ulysses and Gratch, Jonathan},
journal = {Frontiers in Psychology - Cognitive Science},
year = {2021},
volume = {12},
pages = {3833},
url = {https://www.frontiersin.org/article/10.3389/fpsyg.2021.674179},
doi = {10.3389/fpsyg.2021.674179}
}
Abstract: Modulating emotional responses to virtual stimuli is a fundamental goal of many immersive interactive applications. In this study, we leverage the illusion of virtual body ownership and show that owning a virtual body provides a means to modulate emotional responses. In a single-factor repeated-measures experiment, we manipulated the degree of illusory embodiment and assessed the emotional responses to virtual stimuli. We presented emotional stimuli in the same environment as the virtual body. Participants experienced higher arousal, dominance, and more intense valence in the high embodiment condition compared to the low embodiment condition. The illusion of embodiment thus intensifies the emotional processing of the virtual environment. This result suggests that artificial bodies can increase the effectiveness of immersive applications in psychotherapy, entertainment, computer-mediated social interactions, or health applications.
2020
Niko Wißmann, Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik,
A Low-Cost Approach to Fish Tank Virtual Reality with Semi-Automatic Calibration Support
, In
2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
, pp. 598-599
.
IEEE
, 2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{wissmann2020lowcost,
title = {A Low-Cost Approach to Fish Tank Virtual Reality with Semi-Automatic Calibration Support},
author = {Wißmann, Niko and Mišiak, Martin and Fuhrmann, Arnulph and Latoschik, Marc Erich},
booktitle = {2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2020},
pages = {598--599},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-fishtankVR-preprint.pdf},
doi = {10.1109/VRW50115.2020.00150}
}
Abstract: We describe the components and implementation of a cost-effective fish tank virtual reality system. It is based on commodity hardware and provides accurate view tracking combined with high resolution stereoscopic rendering. The system is calibrated very quickly in a semi-automatic step using computer vision. By avoiding the resolution disadvantages of current VR headsets, our prototype is suitable for a wide range of perceptual VR studies.
Niko Wißmann, Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik,
Accelerated Stereo Rendering with Hybrid Reprojection-Based Rasterization and Adaptive Ray-Tracing
, In
Proceedings of the 27th IEEE Virtual Reality conference (IEEE VR '20)
.
IEEE
, 2020.
Best paper award 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{wissmann2020accelerated,
title = {Accelerated Stereo Rendering with Hybrid Reprojection-Based Rasterization and Adaptive Ray-Tracing},
author = {Wißmann, Niko and Mišiak, Martin and Fuhrmann, Arnulph and Latoschik, Marc Erich},
booktitle = {Proceedings of the 27th IEEE Virtual Reality conference (IEEE VR '20)},
year = {2020},
publisher = {IEEE},
note = {Best paper award 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-stereo-rendering-preprint.pdf}
}
Abstract: Stereoscopic rendering is a prominent feature of virtual reality applications to generate depth cues and to provide depth perception in the virtual world. However, straightforward stereo rendering methods are usually expensive since they render the scene from two eye-points, which in general doubles the frame times. This is particularly problematic since virtual reality sets high requirements for real-time capabilities and image resolution. Hence, this paper presents a hybrid rendering system that combines classic rasterization and real-time ray-tracing to accelerate stereoscopic rendering. The system reprojects the pre-rendered left half of the stereo image pair into the right perspective using a forward grid warping technique and identifies resulting reprojection errors, which are then efficiently resolved by adaptive real-time ray-tracing. A final analysis shows that the system achieves a significant performance gain, has a negligible quality impact, and is suitable even for higher rendering resolutions.
Erik Wolf, Nina Döllinger, David Mal, Carolin Wienrich, Mario Botsch, Marc Erich Latoschik,
Body Weight Perception of Females using Photorealistic Avatars in Virtual and Augmented Reality
, In
2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 462-473
.
2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{wolf2020bodyweight,
title = {Body Weight Perception of Females using Photorealistic Avatars in Virtual and Augmented Reality},
author = {Wolf, Erik and Döllinger, Nina and Mal, David and Wienrich, Carolin and Botsch, Mario and Latoschik, Marc Erich},
booktitle = {2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2020},
pages = {462--473},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ismar-body-weight-perception-vr-ar-preprint.pdf},
doi = {10.1109/ISMAR50242.2020.00071}
}
Abstract: The appearance of avatars can potentially alter their users' perception and behavior. Based on this finding, approaches to support the therapy of body perception disturbances in eating or body weight disorders by mixed reality (MR) systems gain in importance. However, the methodological heterogeneity of previous research has made it difficult to assess the suitability of different MR systems for therapeutic use in these areas. The effects of MR system properties and related psychometric factors on body-related perceptions have so far remained unclear. We developed an interactive virtual mirror embodiment application to investigate the differences between an augmented reality see-through head-mounted display (HMD) and a virtual reality HMD on the before-mentioned factors. Additionally, we considered the influence of the participants' body mass index (BMI) and the BMI difference between participants and their avatars on the estimations. The 54 normal-weight female participants significantly underestimated the weight of their photorealistic, generic avatar in both conditions. Body weight estimations were significantly predicted by the participants' BMI and the BMI difference. We also observed partially significant differences in presence and tendencies for differences in virtual body ownership between the systems. Our results offer new insights into the relationships of body weight perception in different MR environments and provide new perspectives for the development of therapeutic applications.
Chris Zimmerer, Ronja Heinrich, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik,
Computing Object Selection Difficulty in VR Using Run-Time Contextual Analysis
, In
26th ACM Symposium on Virtual Reality Software and Technology
.
New York, NY, USA
:
Association for Computing Machinery
, 2020.
Best Poster Award 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10.1145/3385956.3422089,
title = {Computing Object Selection Difficulty in VR Using Run-Time Contextual Analysis},
author = {Zimmerer, Chris and Heinrich, Ronja and Fischbach, Martin and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {26th ACM Symposium on Virtual Reality Software and Technology},
year = {2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
note = {Best Poster Award 🏆},
url = {https://doi.org/10.1145/3385956.3422089},
doi = {10.1145/3385956.3422089}
}
Abstract: This paper introduces a method for computing the difficulty of selection tasks in virtual environments using pointing metaphors by operationalizing an established human motor behavior model. In contrast to previous work, the difficulty is calculated automatically at run-time for arbitrary environments. We present and provide the implementation of our method within Unity 3D. The difficulty is computed based on a contextual analysis of spatial boundary conditions, i.e., target object size and shape, distance to the user, and occlusion. We believe our method will enable developers to build adaptive systems that automatically equip the user with the most appropriate selection technique according to the context. Further, it provides a standard metric to better evaluate and compare different selection techniques.
Daniel Roth, Marc Erich Latoschik,
Construction of the Virtual Embodiment Questionnaire (VEQ)
, In
IEEE Transactions on Visualization and Computer Graphics (TVCG)
, Vol.
26
(
12)
, pp. 3546-3556
.
2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{9199571,
title = {Construction of the Virtual Embodiment Questionnaire (VEQ)},
author = {Roth, Daniel and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
year = {2020},
volume = {26},
number = {12},
pages = {3546--3556},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9199571},
doi = {10.1109/TVCG.2020.3023603}
}
Abstract: User embodiment is important for many virtual reality (VR) applications, for example, in the context of social interaction, therapy, training, or entertainment. However, there is no data-driven and validated instrument to empirically measure the perceptual aspects of embodiment, necessary to reliably evaluate this important phenomenon. To provide a method to assess components of virtual embodiment in a reliable and consistent fashion, we constructed a Virtual Embodiment Questionnaire (VEQ). We reviewed previous literature to identify applicable constructs and questionnaire items, and performed a confirmatory factor analysis (CFA) on the data from three experiments (N = 196). The analysis confirmed three factors: (1) ownership of a virtual body, (2) agency over a virtual body, and (3) the perceived change in the body schema. A fourth study (N = 22) was conducted to confirm the reliability and validity of the scale, by investigating the impacts of latency and latency jitter present in the simulation. We present the proposed scale and study results and discuss resulting implications.
Carolin Wienrich, Maria Eisenmann, Marc Erich Latoschik, Silke Grafe,
CoTeach – Connected Teacher Education
, In
Boosting Virtual Reality in Learning
Michael Schwaiger (Ed.),
.
E.N.T.E.R.
, 2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@incollection{carolin2020coteach,
title = {CoTeach – Connected Teacher Education},
author = {Wienrich, Carolin and Eisenmann, Maria and Latoschik, Marc Erich and Grafe, Silke},
editor = {Schwaiger, Michael},
booktitle = {Boosting Virtual Reality in Learning},
year = {2020},
publisher = {E.N.T.E.R.},
url = {https://www.enter-network.eu/3d-flip-book/focus-europe-vrinsight-greenpaper/}
}
Abstract: CoTeach develops and evaluates innovative teaching and learning contexts for student teachers and scholars. One work package couples the potential of VR with principles of intercultural learning to create tangible experiences with pedagogically responsible value.
Elisabeth Ganal, Andrea Bartl, Franziska Westermeier, Daniel Roth, Marc Erich Latoschik,
Developing a Study Design on the Effects of Different Motion Tracking Approaches on the User Embodiment in Virtual Reality
, In
Mensch und Computer 2020
C. Hansen, A. Nürnberger, B. Preim (Eds.),
.
Gesellschaft für Informatik e.V.
, 2020.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{https://doi.org/10.18420/muc2020-ws134-341,
title = {Developing a Study Design on the Effects of Different Motion Tracking Approaches on the User Embodiment in Virtual Reality},
author = {Ganal, Elisabeth and Bartl, Andrea and Westermeier, Franziska and Roth, Daniel and Latoschik, Marc Erich},
editor = {Hansen, C. and Nürnberger, A. and Preim, B.},
booktitle = {Mensch und Computer 2020},
year = {2020},
publisher = {Gesellschaft für Informatik e.V.},
url = {https://dl.gi.de/bitstream/handle/20.500.12116/33557/muc2020-ws-341.pdf?sequence=1&isAllowed=y},
doi = {10.18420/MUC2020-WS134-341}
}
Chris Zimmerer, Erik Wolf, Sara Wolf, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik,
Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality
, In
2020 International Conference on Multimodal Interaction
, pp. 222-231
.
2020.
Best Paper Nominee 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10.1145/3382507.3418850,
title = {Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality},
author = {Zimmerer, Chris and Wolf, Erik and Wolf, Sara and Fischbach, Martin and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {2020 International Conference on Multimodal Interaction},
year = {2020},
pages = {222--231},
note = {Best Paper Nominee 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-icmi-1169-preprint.pdf},
doi = {10.1145/3382507.3418850}
}
Abstract: Multimodal Interfaces (MMIs) have been considered to provide promising interaction paradigms for Virtual Reality (VR) for some time. However, they are still far less common than unimodal interfaces (UMIs). This paper presents a summative user study comparing an MMI to a typical UMI for a design task in VR. We developed an application targeting creative 3D object manipulations, i.e., creating 3D objects and modifying typical object properties such as color or size. The associated open user task is based on the Torrance Tests of Creative Thinking. We compared a synergistic multimodal interface using speech-accompanied pointing/grabbing gestures with a more typical unimodal interface using a hierarchical radial menu to trigger actions on selected objects. Independent judges rated the creativity of the resulting products using the Consensual Assessment Technique. Additionally, we measured the creativity-promoting factors flow, usability, and presence. Our results show that the MMI performs on par with the UMI in all measurements despite its limited flexibility and reliability. These promising results demonstrate the technological maturity of MMIs and their potential to extend traditional interaction techniques in VR efficiently.
Jan-Philipp Stauffert, Florian Niebling, Jean-Luc Lugrin, Marc Erich Latoschik,
Guided Sine Fitting for Latency Estimation in Virtual Reality
, In
2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
, pp. 707-708
.
2020.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{stauffert2020guided,
title = {Guided Sine Fitting for Latency Estimation in Virtual Reality},
author = {Stauffert, Jan-Philipp and Niebling, Florian and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2020},
pages = {707--708},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-poster-auto-sine-preprint.pdf}
}
Sebastian Oberdörfer, Anne Elsässer, David Schraudt, Silke Grafe, Marc Erich Latoschik,
Horst – The Teaching Frog: Learning the Anatomy of a Frog Using Tangible AR
, In
Proceedings of the 2020 Mensch und Computer Conference (MuC '20)
, pp. 303-307
.
New York, NY, USA
:
Association for Computing Machinery
, 2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2020horst,
title = {Horst – The Teaching Frog: Learning the Anatomy of a Frog Using Tangible AR},
author = {Oberdörfer, Sebastian and Elsässer, Anne and Schraudt, David and Grafe, Silke and Latoschik, Marc Erich},
booktitle = {Proceedings of the 2020 Mensch und Computer Conference (MuC '20)},
year = {2020},
pages = {303--307},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-muc-frog-ar-preprint.pdf},
doi = {10.1145/3404983.3410007}
}
Abstract: Learning environments targeting Augmented Reality (AR) visualize complex facts, can increase a learner's motivation, and allow for the application of learning contents. When using tangible user interfaces, the learning process receives a physical aspect improving the overall intuitive use. We present a tangible AR system targeting the learning of a frog's anatomy. The learning environment is based on a plush frog containing removable markers. Detecting the markers replaces them with 3D models of the organs. By extracting individual organs, learners can inspect them up close and learn more about their functions. Our AR frog further includes a quiz for a self-assessment of the learning progress and a gamification system to raise the overall motivation.
Daniel Schlör, Albin Zehe, Konstantin Kobs, Blerta Veseli, Franziska Westermeier, Larissa Brübach, Daniel Roth, Marc Erich Latoschik, Andreas Hotho,
Improving Sentiment Analysis with Biofeedback Data
, In
Proceedings of the Workshop on peOple in laNguage, vIsiOn and the miNd (ONION)
.
2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{schlor2020improving,
title = {Improving Sentiment Analysis with Biofeedback Data},
author = {Schlör, Daniel and Zehe, Albin and Kobs, Konstantin and Veseli, Blerta and Westermeier, Franziska and Brübach, Larissa and Roth, Daniel and Latoschik, Marc Erich and Hotho, Andreas},
booktitle = {Proceedings of the Workshop on peOple in laNguage, vIsiOn and the miNd (ONION)},
year = {2020},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ieeevr-lugrin-vr-teacher-training/2020-onion-sentiment-eeg-preprint.pdf}
}
Abstract: Humans are frequently able to read and interpret the emotions of others by directly taking verbal and non-verbal signals in human-to-human communication into account, or to infer or even experience emotions from mediated stories. For computers, however, emotion recognition is a complex problem: thoughts and feelings are the roots of many behavioral responses, and they are deeply entangled with neurophysiological changes within humans. As such, emotions are very subjective, are often expressed in a subtle manner, and highly depend on context. For example, machine learning approaches for text-based sentiment analysis often rely on incorporating sentiment lexicons or language models to capture the contextual meaning. This paper explores whether and how we can further enhance sentiment analysis using biofeedback from humans who are experiencing emotions while reading texts. Specifically, we record the heart rate and brain waves of readers that are presented with short texts which have been annotated with the emotions they induce. We use these physiological signals to improve the performance of a lexicon-based sentiment classifier. We find that the combination of several biosignals can improve the ability of a text-based classifier to detect the presence of a sentiment in a text on a per-sentence level.
Jan-Philipp Stauffert, Florian Niebling, Marc Erich Latoschik,
Latency and Cybersickness: Impact, Causes, and Measures. A Review
, In
Frontiers in Virtual Reality
, Vol.
1
, p. 31
.
2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{stauffert:2020b,
title = {Latency and Cybersickness: Impact, Causes, and Measures. A Review},
author = {Stauffert, Jan-Philipp and Niebling, Florian and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2020},
volume = {1},
pages = {31},
url = {https://www.frontiersin.org/article/10.3389/frvir.2020.582204},
doi = {10.3389/frvir.2020.582204}
}
Abstract: Latency is a key characteristic inherent to any computer system. Motion-to-Photon (MTP) latency describes the time between the movement of a tracked object and its corresponding movement rendered and depicted by computer-generated images on a graphical output screen. High MTP latency can cause a loss of performance in interactive graphics applications and, even worse, can provoke cybersickness in Virtual Reality (VR) applications. Here, cybersickness can degrade VR experiences or may render the experiences completely unusable. It can confound research findings of an otherwise sound experiment. Latency as a contributing factor to cybersickness needs to be properly understood. Its effects need to be analyzed, its sources need to be identified, good measurement methods need to be developed, and proper countermeasures need to be taken to reduce potentially harmful impacts of latency on the usability and safety of VR systems. Research shows that latency can exhibit intricate timing patterns with various spiking and periodic behavior. These timing behaviors may vary, yet most are found to provoke cybersickness. Overall, latency can differ drastically between different systems, interfering with the generalization of measurement results. This review article describes the causes and effects of latency with regard to cybersickness. We report on different existing approaches to measure and report latency. Hence, the article provides readers with the knowledge to understand and report latency for their own applications, evaluations, and experiments. It should also help to measure, identify, and finally control and counteract latency, and hence gain confidence in the soundness of empirical data collected during VR exposures. Low latency increases the usability and safety of VR systems.
Maximilian Landeck, Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik,
Metachron: A framework for time perception research in VR
, In
Proceedings of the 26th ACM Conference on Virtual Reality Software and Technology
.
2020.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{Landeck2020Metachron,
title = {Metachron: A framework for time perception research in VR},
author = {Landeck, Maximilian and Unruh, Fabian and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {Proceedings of the 26th ACM Conference on Virtual Reality Software and Technology},
year = {2020},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-vrst-metachron-preprint.pdf}
}
Gabriela Ripka, Silke Grafe, Marc Erich Latoschik,
Preservice Teachers' encounter with Social VR – Exploring Virtual Teaching and Learning Processes in Initial Teacher Education
, In
SITE Interactive Conference
Elizabeth Langran (Ed.),
, pp. 549-562
.
Association for the Advancement of Computing in Education (AACE)
, 2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{ripka2020preservice,
title = {Preservice Teachers' encounter with Social VR – Exploring Virtual Teaching and Learning Processes in Initial Teacher Education},
author = {Ripka, Gabriela and Grafe, Silke and Latoschik, Marc Erich},
editor = {Langran, Elizabeth},
booktitle = {SITE Interactive Conference},
year = {2020},
pages = {549--562},
publisher = {Association for the Advancement of Computing in Education (AACE)},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-site-preservice-teacher-svr.pdf}
}
Abstract: With 21st century challenges ahead, higher education teaching and learning need new pedagogical concepts. Technologies like social VR enable student-centered, action-oriented, and situated learning. This paper presents findings of the pedagogical implementation of a distributed social VR prototype, a fully immersive VR learning environment, into an Initial Teacher Education program in Germany. The exploratory study addressed the following research questions: 1) How do preservice teachers perceive teaching and learning activities in fully immersive VR and 2) how should teaching and learning processes using social VR in Teacher Education be designed? It followed a design-based research approach. The pedagogical concept for teaching and learning in social VR was based on principles of action-orientation. A convenience sample of three groups of five students each took part in a 90-minute teaching and learning scenario using a fully immersive VR learning environment. During these seminar units, students engaged in qualitative group interviews and shared their perception of the action-oriented teaching and learning activities in VR. The results showed that preservice teachers had the feeling of being less distracted in social VR. Additionally, during group activities, missing social and behavioral cues made communication procedures more challenging for participants. However, some participants noticed a stronger sense of community while collaborating with others.
Yann Glémarec, Jean-Luc Lugrin, Anne-Gwenn Bosser, Paul Cagniat, Cédric Buche, Marc Erich Latoschik,
Pushing Out the Classroom Walls: A Scalability Benchmark for a Virtual Audience Behaviour Model in Virtual Reality
.
Gesellschaft für Informatik e.V.
, 2020.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@article{glemarec2020pushing,
title = {Pushing Out the Classroom Walls: A Scalability Benchmark for a Virtual Audience Behaviour Model in Virtual Reality},
author = {Glémarec, Yann and Lugrin, Jean-Luc and Bosser, Anne-Gwenn and Cagniat, Paul and Buche, Cédric and Latoschik, Marc Erich},
year = {2020},
publisher = {Gesellschaft für Informatik e.V.},
url = {http://dl.gi.de/handle/20.500.12116/33554},
doi = {10.18420/MUC2020-WS134-337}
}
Stephan Wenninger, Jascha Achenbach, Andrea Bartl, Marc Erich Latoschik, Mario Botsch,
Realistic Virtual Humans from Smartphone Videos
, In
VRST
Robert J. Teather, Chris Joslin, Wolfgang Stuerzlinger, Pablo Figueroa, Yaoping Hu, Anil Ufuk Batmaz, Wonsook Lee, Francisco Ortega (Eds.),
, pp. 29:1-29:11
.
ACM
, 2020.
Best paper award 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{conf/vrst/WenningerABLB20,
title = {Realistic Virtual Humans from Smartphone Videos},
author = {Wenninger, Stephan and Achenbach, Jascha and Bartl, Andrea and Latoschik, Marc Erich and Botsch, Mario},
editor = {Teather, Robert J. and Joslin, Chris and Stuerzlinger, Wolfgang and Figueroa, Pablo and Hu, Yaoping and Batmaz, Anil Ufuk and Lee, Wonsook and Ortega, Francisco},
booktitle = {VRST},
year = {2020},
pages = {29:1-29:11},
publisher = {ACM},
note = {Best paper award 🏆},
url = {https://dl.acm.org/doi/pdf/10.1145/3385956.3418940}
}
Abstract: This paper introduces an automated 3D-reconstruction method for generating high-quality virtual humans from monocular smartphone cameras. The inputs to our approach are two video clips, one capturing the whole body and the other providing detailed close-ups of head and face. Optical flow analysis and sharpness estimation select individual frames, from which two dense point clouds for the body and head are computed using multi-view reconstruction. Automatically detected landmarks guide the fitting of a virtual human body template to these point clouds, thereby reconstructing the geometry. A graph-cut stitching approach reconstructs a detailed texture. Our results are compared to existing low-cost monocular approaches as well as to expensive multi-camera scan rigs. We achieve visually convincing reconstructions that are almost on par with complex camera rigs while surpassing similar low-cost approaches. The generated high-quality avatars are ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity.
Dominik Gall, Jan Preßler, Jörn Hurtienne, Marc Erich Latoschik,
Self-organizing knowledge management might improve the quality of person-centered dementia care: A qualitative study
, In
International Journal of Medical Informatics
, Vol.
139
, p. 104132
.
2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{gall2020selforganizing,
title = {Self-organizing knowledge management might improve the quality of person-centered dementia care: A qualitative study},
author = {Gall, Dominik and Preßler, Jan and Hurtienne, Jörn and Latoschik, Marc Erich},
journal = {International Journal of Medical Informatics},
year = {2020},
volume = {139},
pages = {104132},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-gall-self-organizing-kowledge-management.pdf},
doi = {10.1016/j.ijmedinf.2020.104132}
}
Abstract: Background: In institutional dementia care, person-centered care improves care processes and the quality of life of residents. However, communication gaps impede the implementation of person-centered care in favor of routinized care.
Objective: We evaluated whether self-organizing knowledge management reduces communication gaps and improves the quality of person-centered dementia care.
Method: We implemented a self-organizing knowledge management system. Eight significant others of residents with severe dementia and six professional caregivers used a mobile application for six months. We conducted qualitative interviews and focus groups afterward.
Main findings: Participants reported that the system increased the quality of person-centered care, reduced communication gaps, increased the task satisfaction of caregivers and the wellbeing of significant others.
Conclusions: Based on our findings, we develop the following hypotheses: Self-organizing knowledge management might provide a promising tool to improve the quality of person-centered care. It might reduce communication barriers that impede person-centered care. It might allow transferring content-maintaining tasks from caregivers to significant others. Such distribution of tasks, in turn, might be beneficial for both parties. Furthermore, shared knowledge about situational features might guide person-centered interventions.
Jan-Philipp Stauffert, Florian Niebling, Marc Erich Latoschik,
Simultaneous Run-Time Measurement of Motion-to-Photon Latency and Latency Jitter
, In
2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)
, pp. 636-644
.
2020.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{stauffert2020simultaneous,
title = {Simultaneous Run-Time Measurement of Motion-to-Photon Latency and Latency Jitter},
author = {Stauffert, Jan-Philipp and Niebling, Florian and Latoschik, Marc Erich},
booktitle = {2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
year = {2020},
pages = {636--644},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-latency-preprint.pdf}
}
P. Ziebell, J. Stümpfig, M. Eidel, S. C. Kleih, A. Kübler, M. E. Latoschik, S. Halder,
Stimulus modality influences session-to-session transfer of training effects in auditory and tactile streaming-based P300 brain–computer interfaces
, In
Scientific Reports
, Vol.
10
(
1)
, pp. 11873-
.
2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{ziebell2020stimulus,
title = {Stimulus modality influences session-to-session transfer of training effects in auditory and tactile streaming-based P300 brain–computer interfaces},
author = {Ziebell, P. and Stümpfig, J. and Eidel, M. and Kleih, S. C. and Kübler, A. and Latoschik, M. E. and Halder, S.},
journal = {Scientific Reports},
year = {2020},
volume = {10},
number = {1},
pages = {11873--},
url = {https://doi.org/10.1038/s41598-020-67887-6},
doi = {10.1038/s41598-020-67887-6}
}
Abstract: Despite recent successes, patients suffering from locked-in syndrome (LIS) still struggle to communicate using vision-independent brain–computer interfaces (BCIs). In this study, we compared auditory and tactile BCIs, regarding training effects and cross-stimulus-modality transfer effects, when switching between stimulus modalities. We utilized a streaming-based P300 BCI, which was developed as a low workload approach to prevent potential BCI-inefficiency. We randomly assigned 20 healthy participants to two groups. The participants received three sessions of training either using an auditory BCI or using a tactile BCI. In an additional fourth session, BCI versions were switched to explore possible cross-stimulus-modality transfer effects. Both BCI versions could be operated successfully in the first session by the majority of the participants, with the tactile BCI being experienced as more intuitive. Significant training effects were found mostly in the auditory BCI group and strong evidence for a cross-stimulus-modality transfer occurred for the auditory training group that switched to the tactile version but not vice versa. All participants were able to control at least one BCI version, suggesting that the investigated paradigms are generally feasible and merit further research into their applicability with LIS end-users. Individual preferences regarding stimulus modality should be considered.
Gabriela Ripka, Jennifer Tiede, Silke Grafe, Marc Erich Latoschik,
Teaching and Learning Processes in Immersive VR – Comparing Expectations of Preservice Teachers and Teacher Educators
, In
Society for Information Technology & Teacher Education (SITE) International Conference
, pp. 1863-1871
.
Association for the Advancement of Computing in Education (AACE)
, 2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{ripka2020teaching,
title = {Teaching and Learning Processes in Immersive VR – Comparing Expectations of Preservice Teachers and Teacher Educators},
author = {Ripka, Gabriela and Tiede, Jennifer and Grafe, Silke and Latoschik, Marc Erich},
booktitle = {Society for Information Technology & Teacher Education (SITE) International Conference},
year = {2020},
pages = {1863-1871},
publisher = {Association for the Advancement of Computing in Education (AACE)},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-ripka-teaching-and-learning-processes-ivr-preprint.pdf}
}
Abstract: The use of VR in higher education is no longer uncommon. However, concepts still mainly focus on technical rather than pedagogical aspects of VR in the classroom. Exploring the expectations of teacher educators as well as of preservice teachers appears indispensable (1) to achieve a sound understanding of requirements, (2) to identify potential design spaces, and finally (3) to create and derive suitable pedagogical approaches for VR in initial teacher education. This paper presents results of guideline-based qualitative interviews comparing the expectations of teacher educators and of preservice teachers regarding teaching and learning in immersive virtual learning environments. The results showed that preservice teachers and teacher educators expect VR to enrich classes through interactive engagement in situations that would otherwise be too costly or dangerous. Regarding the design, teacher educators put the emphasis on functionality, while student teachers emphasized that they do not want to miss social interactions with their peers. Furthermore, both groups stated preferred modes of collaboration and interaction that take into account the characteristics of a virtual learning environment, such as being able to use diverse learning spaces for group work. Interviewees agreed on two vital factors for effective learning and teaching processes: flexibility and the possibility of customization, given the technical constraints involved. Apart from this, preservice teachers strongly emphasized their concerns about data usage and the ethics of using avatars and agents for representation.
Sebastian Oberdörfer, David Heidrich, Marc Erich Latoschik,
Think Twice: The Influence of Immersion on Decision Making during Gambling in Virtual Reality
, In
Proceedings of the 27th IEEE Virtual Reality conference (VR '20)
, pp. 483-492
.
Atlanta, USA
2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2020think,
title = {Think Twice: The Influence of Immersion on Decision Making during Gambling in Virtual Reality},
author = {Oberdörfer, Sebastian and Heidrich, David and Latoschik, Marc Erich},
booktitle = {Proceedings of the 27th IEEE Virtual Reality conference (VR '20)},
year = {2020},
pages = {483-492},
address = {Atlanta, USA},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-oberdoerfer-think-twice.pdf},
doi = {10.1109/VR46266.2020.00-35}
}
Abstract: Immersive Virtual Reality (VR) is increasingly being explored as an alternative medium for gambling games to attract players. Typically, gambling games try to impair a player's decision making, usually to the disadvantage of the player's financial outcome. Impaired decision making results in the inability to differentiate between advantageous and disadvantageous options. We investigated if and how immersion impacts decision making using a VR-based realization of the Iowa Gambling Task (IGT) to pinpoint potential risks and effects of gambling in VR. During the IGT, subjects are challenged to draw cards from four different decks, of which two are advantageous. The selections made serve as a measure of a participant's decision making during the task. In a novel user study, we compared the effects of immersion on decision making between a low-immersive desktop-3D-based IGT realization and a highly immersive VR version. Our results revealed significantly more disadvantageous decisions when playing the immersive VR version. This indicates an impairing effect of immersion on simulated real-life decision making and provides empirical evidence for a high risk potential of gambling games targeting immersive VR.
Stefan Lindner, Marc Erich Latoschik, Heike Rittner,
Virtual Reality als Baustein in der Behandlung akuter und chronischer Schmerzen
, In
AINS-Anästhesiologie· Intensivmedizin· Notfallmedizin· Schmerztherapie
, Vol.
55
(
09)
, pp. 549-561
.
Georg Thieme Verlag KG
, 2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{lindner2020virtual,
title = {Virtual Reality als Baustein in der Behandlung akuter und chronischer Schmerzen},
author = {Lindner, Stefan and Latoschik, Marc Erich and Rittner, Heike},
journal = {AINS-Anästhesiologie· Intensivmedizin· Notfallmedizin· Schmerztherapie},
year = {2020},
volume = {55},
number = {09},
pages = {549--561},
publisher = {Georg Thieme Verlag KG},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ains-vr-chronischer-schmerz-preprint.pdf}
}
Abstract: Pain treatment is part of the daily routine of clinical anesthesiologists. Given the need for a well-considered use of pain medication, alternatives to pharmacological pain therapy are necessary. In recent years, Virtual Reality (VR) has established itself as a realistic complement, driven by ever more affordable and better technologies. The possibilities of VR as well as indications and contraindications are presented.
Dominik Gall, Marc Erich Latoschik,
Visual angle modulates affective responses to audiovisual stimuli
, In
Computers in Human Behavior
, Vol.
109
, p. 106346
.
2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{gall2020visual,
title = {Visual angle modulates affective responses to audiovisual stimuli},
author = {Gall, Dominik and Latoschik, Marc Erich},
journal = {Computers in Human Behavior},
year = {2020},
volume = {109},
pages = {106346},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-visual-angle-gall-latoschik.pdf},
doi = {10.1016/j.chb.2020.106346}
}
Abstract: What we see influences our emotions. Technology often mediates the visual content we perceive. Visual angle is an essential parameter of how we see such content. It operationalizes visible properties of human-computer interfaces. However, we know little about the content-independent effect of visual angle on emotional responses to audiovisual stimuli. We show that visual angle alone affects emotional responses to audiovisual features, independent of object perception. We conducted a 2 x 2 x 3 factorial repeated-measures experiment with 143 undergraduate students. We simultaneously presented monochrome rectangles with pure tones and assessed valence, arousal, and dominance. In the high visual angle condition, arousal increased, valence and dominance decreased, and lightness modulated arousal. In the low visual angle condition, pitch modulated arousal, and lightness affected valence. Visual angle weights the affective relevance of perception modalities independent of spatial representations. Visual angle serves as an early-stage perceptual feature for organizing emotional responses. Control of this presentation layer allows for provoking or avoiding emotional response where intended.
Kristina Bucher, Sebastian Oberdörfer, Silke Grafe, Marc Erich Latoschik,
Von Medienbeiträgen und Applikationen - ein interdisziplinäres Konzept zum Lehren und Lernen mit Augmented und Virtual Reality für die Hochschullehre
, In
Schnittstellen und Interfaces - Digitaler Wandel in Bildungseinrichtungen
Thomas Knaus, Olga Merz (Eds.),
, Vol.
7
, pp. 225-238
.
Munich, Germany
:
kopaed
, 2020.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@incollection{bucher2020medienbeitrgen,
title = {Von Medienbeiträgen und Applikationen - ein interdisziplinäres Konzept zum Lehren und Lernen mit Augmented und Virtual Reality für die Hochschullehre},
author = {Bucher, Kristina and Oberdörfer, Sebastian and Grafe, Silke and Latoschik, Marc Erich},
editor = {Knaus, Thomas and Merz, Olga},
booktitle = {Schnittstellen und Interfaces - Digitaler Wandel in Bildungseinrichtungen},
year = {2020},
volume = {7},
pages = {225-238},
publisher = {kopaed},
address = {Munich, Germany},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-framediale-medienbeitraege-preprint.pdf}
}
Abstract: Augmented Reality (AR) and Virtual Reality (VR) are increasingly finding their way into educational practice. Their use brings both potentials and possible problems for teaching and learning processes. It is therefore the task of teacher education to enable (prospective) teachers to acquire the competencies needed to integrate AR and VR into teaching and learning processes. Against this background, an interdisciplinary concept for higher education teaching was developed and empirically evaluated with regard to its objectives. This contribution first presents key design aspects of the concept as well as initial findings from a pilot study. Subsequently, practical experiences from the interdisciplinary collaboration are reflected upon and discussed.
2019
Erik Wolf, Sara Klüber, Chris Zimmerer, Jean-Luc Lugrin, Marc Erich Latoschik,
"Paint that object yellow": Multimodal Interaction to Enhance Creativity During Design Tasks in VR
, In
2019 International Conference on Multimodal Interaction
, pp. 195-204
.
2019.
Best Paper Runner-Up 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{wolf2019paint,
title = {"Paint that object yellow": Multimodal Interaction to Enhance Creativity During Design Tasks in VR},
author = {Wolf, Erik and Klüber, Sara and Zimmerer, Chris and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {2019 International Conference on Multimodal Interaction},
year = {2019},
pages = {195-204},
note = {Best Paper Runner-Up 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-icmi-creativity-in-vr.pdf},
doi = {10.1145/3340555.3353724}
}
Abstract: Virtual reality (VR) has always been considered a promising medium to support designers with alternative work environments. Still, graphical user interfaces are prone to induce attention shifts between the user interface and the manipulated target objects, which hampers the creative process. This work proposes a speech-and-gesture-based interaction paradigm for creative tasks in VR. We developed a multimodal toolbox (MTB) for VR-based design applications and compared it to a typical unimodal menu-based toolbox (UTB). The comparison uses a design-oriented use case and measures flow, usability, and presence as relevant characteristics for a VR-based design process. The multimodal approach (1) led to a lower perceived task duration and a higher reported feeling of flow. It (2) provided a higher intuitive use and a lower mental workload while not being slower than the UTB. Finally, it (3) generated a higher feeling of presence. Overall, our results confirm significant advantages of the proposed multimodal interaction paradigm and the developed MTB for important characteristics of design processes in VR.
Jean-Luc Lugrin, Florian Kern, Constantin Kleinbeck, Daniel Roth, Christian Daxery, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik,
A Framework for Location-Based VR Applications
, In
Proceedings of the GI VR/AR - Workshop
.
Shaker Verlag
, 2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{LugrinHolopark2019,
title = {A Framework for Location-Based VR Applications},
author = {Lugrin, Jean-Luc and Kern, Florian and Kleinbeck, Constantin and Roth, Daniel and Daxery, Christian and Feigl, Tobias and Mutschler, Christopher and Latoschik, Marc Erich},
booktitle = {Proceedings of the GI VR/AR - Workshop},
year = {2019},
publisher = {Shaker Verlag},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-gi-vr-ar-framework-for-location-based-vr-applications.pdf},
doi = {10.2370/9783844068870}
}
Abstract: This paper presents a framework to develop and investigate location-based Virtual Reality (VR) applications. We demonstrate our framework by introducing a novel type of VR museum, designed to support a large number of simultaneous co-located users. These visitors walk in a hangar-scale tracking zone (600 m2) while sharing a virtual space ten times larger (7000 m2). Co-located VR applications like this one open up novel VR perspectives. However, sharing a limitless virtual world using a large but limited tracking space also raises numerous challenges: from financial considerations and technical implementation to interactions and evaluations (e.g., user representation, navigation, health & safety, monitoring). How to design, develop, and evaluate such a VR system is still an open question. Here, we describe a fully implemented framework with its specific features and performance optimizations. We also illustrate our framework's viability with a first VR application and discuss its potential benefits for education and future evaluation.
Yann Glémarec, Anne-Gwenn Bosser, Jean-Luc Lugrin, Mathieu Chollet, Cédric Buche, Maximilian Landeck, Marc Erich Latoschik,
A Scalability Benchmark for a Virtual Audience Perception Model in Virtual Reality
, In
Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology
, Vol.
VRST'19
.
2019.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{glemarec2019scalability,
title = {A Scalability Benchmark for a Virtual Audience Perception Model in Virtual Reality},
author = {Glémarec, Yann and Bosser, Anne-Gwenn and Lugrin, Jean-Luc and Chollet, Mathieu and Buche, Cédric and Landeck, Maximilian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology},
year = {2019},
volume = {VRST'19},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-vrst-atmo-benchmarking-preprint.pdf}
}
Daniel Roth, Larissa Brübach, Franziska Westermeier, Christian Schell, Tobias Feigl, Marc Erich Latoschik,
A Social Interaction Interface Supporting Affective Augmentation Based on Neuronal Data
, In
Symposium on Spatial User Interaction (SUI '19), October 19--20, 2019, New Orleans, LA, USA
, Vol.
SUI '19
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{roth2019toappearsocial,
title = {A Social Interaction Interface Supporting Affective Augmentation Based on Neuronal Data},
author = {Roth, Daniel and Brübach, Larissa and Westermeier, Franziska and Schell, Christian and Feigl, Tobias and Latoschik, Marc Erich},
booktitle = {Symposium on Spatial User Interaction (SUI '19), October 19--20, 2019, New Orleans, LA, USA},
year = {2019},
volume = {SUI '19},
url = {},
doi = {10.1145/3357251.3360018}
}
Abstract: In this demonstration we present a prototype for an avatar-mediated social interaction interface that supports the replication of head and eye movements in distributed virtual environments. In addition to the retargeting of these natural behaviors, the system is capable of augmenting the interaction based on the visual presentation of affective states. We derive those states using neuronal data captured by electroencephalographic (EEG) sensing in combination with a machine-learning-driven classification of emotional states.
Daniel Roth, Sebastian von Mammen, Julian Keil, Manuel Schildknecht, Marc Erich Latoschik,
Approaching Difficult Terrain with Sensitivity: A Virtual Reality Game on the Five Stages of Grief
, In
2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)
, pp. 1-4
.
2019.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{Roth:2019aa,
title = {Approaching Difficult Terrain with Sensitivity: A Virtual Reality Game on the Five Stages of Grief},
author = {Roth, Daniel and von Mammen, Sebastian and Keil, Julian and Schildknecht, Manuel and Latoschik, Marc Erich},
booktitle = {2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)},
year = {2019},
pages = {1--4},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Roth2019aa.pdf}
}
Daniel Roth, Jan-Philipp Stauffert, Marc Erich Latoschik,
Avatar Embodiment, Behavior Replication, and Kinematics in Virtual Reality
, In
VR Developer Gems
William R. Sherman (Ed.),
, Vol.
1
, pp. 321-348
.
Springer US
, 2019.
[BibTeX]
[Download]
[BibSonomy]
@inbook{roth2019avatar,
title = {Avatar Embodiment, Behavior Replication, and Kinematics in Virtual Reality},
author = {Roth, Daniel and Stauffert, Jan-Philipp and Latoschik, Marc Erich},
editor = {Sherman, William R.},
booktitle = {VR Developer Gems},
year = {2019},
volume = {1},
pages = {321-348},
publisher = {Springer US},
url = {}
}
Daniel Roth, Franziska Westermeier, Larissa Brübach, Tobias Feigl, Christian Schell, Marc Erich Latoschik,
Brain 2 Communicate: EEG-based Affect Recognition to Augment Virtual Social Interactions
, In
Mensch und Computer 2019 - Workshopband
.
Bonn
:
Gesellschaft für Informatik e.V.
, 2019.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@conference{roth2019toappearbrain,
title = {Brain 2 Communicate: EEG-based Affect Recognition to Augment Virtual Social Interactions},
author = {Roth, Daniel and Westermeier, Franziska and Brübach, Larissa and Feigl, Tobias and Schell, Christian and Latoschik, Marc Erich},
booktitle = {Mensch und Computer 2019 - Workshopband},
year = {2019},
publisher = {Gesellschaft für Informatik e.V.},
address = {Bonn},
url = {https://dl.gi.de/handle/20.500.12116/25205},
doi = {10.18420/muc2019-ws-571}
}
Stephan Hertweck, Desirée Weber, Hisham Alwanni, Fabian Unruh, Martin Fischbach, Marc Erich Latoschik, Tonio Ball,
Brain Activity in Virtual Reality: Assessing Signal Quality of High-Resolution EEG While Using Head-Mounted Displays
, In
Proceedings of the 26th IEEE Conference on Virtual Reality and 3D User Interfaces (VR)
, pp. 970-971
.
IEEE
, 2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{hertweck2019brain,
title = {Brain Activity in Virtual Reality: Assessing Signal Quality of High-Resolution EEG While Using Head-Mounted Displays},
author = {Hertweck, Stephan and Weber, Desirée and Alwanni, Hisham and Unruh, Fabian and Fischbach, Martin and Latoschik, Marc Erich and Ball, Tonio},
booktitle = {Proceedings of the 26th IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
year = {2019},
pages = {970-971},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-brain-activity-vr-preprint.pdf}
}
Abstract: Biometric measures such as the electroencephalogram (EEG) promise to become viable alternatives to subjective questionnaire ratings for the evaluation of psychophysical effects associated with Virtual Reality (VR) systems, as they provide objective and continuous measurements without breaking the exposure. The extent to which the EEG signal can be disturbed by the presence of VR systems, however, has barely been investigated. This study outlines how to evaluate the compatibility of a given EEG-VR setup using the example of two commercial head-mounted displays (HMDs), the Oculus Rift and the HTC Vive Pro. We use a novel experimental protocol to compare the spectral composition between conditions with and without an HMD present during an eyes-open vs. eyes-closed task. We found general artifacts at the line hum of 50 Hz, and additional HMD refresh rate artifacts (90 Hz) for the Oculus Rift exclusively. Frequency components typically most interesting to non-invasive EEG research and applications (<50 Hz), however, remained largely unaffected. We observed similar topographies of visually-induced modulation of alpha band power for both HMD conditions in all subjects. Hence, the study introduces a necessary validation test for HMDs in combination with EEG and further promotes EEG as a potential biometric measurement method for psychophysical effects in VR systems.
Doris Aschenbrenner, Florian Leutert, Argun Cencen, Jouke Verlinden, Klaus Schilling, Marc Erich Latoschik, Stephan Lukosch,
Comparing Human Factors for Augmented Reality supported Single and Cooperative Repair Operations of Industrial Robots
, In
Frontiers in Robotics and AI
, Vol.
6
, p. 37
.
Frontiers
, 2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{aschenbrenner2019comparing,
title = {Comparing Human Factors for Augmented Reality supported Single and Cooperative Repair Operations of Industrial Robots},
author = {Aschenbrenner, Doris and Leutert, Florian and Cencen, Argun and Verlinden, Jouke and Schilling, Klaus and Latoschik, Marc Erich and Lukosch, Stephan},
journal = {Frontiers in Robotics and AI},
year = {2019},
volume = {6},
pages = {37},
publisher = {Frontiers},
url = {https://www.frontiersin.org/articles/10.3389/frobt.2019.00037/full?&utm_source=Email_to_authors_&utm_medium=Email&utm_content=T1_11.5e1_author&utm_campaign=Email_publication&field=&journalName=Frontiers_in_Robotics_and_AI&id=428452}
}
Abstract: In order to support the decision-making process of industry on how to implement Augmented Reality (AR) in production, this article provides guidance through a set of comparative user studies. The results are obtained from the feedback of 160 participants who performed the same repair task on a switch cabinet of an industrial robot. The studies compare several AR instruction applications on different display devices (head-mounted display, handheld tablet PC, and projection-based spatial AR) with baseline conditions (paper instructions and phone support), both in a single-user and a collaborative setting. Beyond insights into the performance of the individual device types for single-mode operation, the study shows significant indications that AR techniques are especially helpful in a collaborative setting.
Jean-Luc Lugrin, Andreas Juchno, Philipp Schaper, Maximilian Landeck, Marc Erich Latoschik,
Drone-Steering: A Novel VR Traveling Technique
, In
Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology
, Vol.
VRST'19
.
2019.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2019dronesteering,
title = {Drone-Steering: A Novel VR Traveling Technique},
author = {Lugrin, Jean-Luc and Juchno, Andreas and Schaper, Philipp and Landeck, Maximilian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology},
year = {2019},
volume = {VRST'19},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-vrst-drone-steering-preprint.pdf}
}
Jean-Luc Lugrin, Fabian Unruh, Maximilian Landeck, Yoan Lamour, Marc Erich Latoschik, Kai Vogeley, Marc Wittmann,
Experiencing Waiting Time in Virtual Reality
, In
Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology
.
2019.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2019experiencing,
title = {Experiencing Waiting Time in Virtual Reality},
author = {Lugrin, Jean-Luc and Unruh, Fabian and Landeck, Maximilian and Lamour, Yoan and Latoschik, Marc Erich and Vogeley, Kai and Wittmann, Marc},
booktitle = {Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology},
year = {2019},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-vrst-experiencing-waiting-time-preprint.pdf}
}
Sophia C Steinhaeusser, Anna Riedmann, Max Haller, Sebastian Oberdörfer, Kristina Bucher, Marc Erich Latoschik,
Fancy Fruits - An Augmented Reality Application for Special Needs Education
, In
Proceedings of the 11th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2019)
, pp. 1-4
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{steinhaeusser2019fancy,
title = {Fancy Fruits - An Augmented Reality Application for Special Needs Education},
author = {Steinhaeusser, Sophia C and Riedmann, Anna and Haller, Max and Oberdörfer, Sebastian and Bucher, Kristina and Latoschik, Marc Erich},
booktitle = {Proceedings of the 11th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2019)},
year = {2019},
pages = {1-4},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-vsgames-fancy-fruits-preprint.pdf},
doi = {10.1109/VS-Games.2019.8864547}
}
Abstract: Augmented Reality (AR) allows for a connection between real and virtual worlds, thus providing a high potential for Special Needs Education (SNE). We developed an educational application called Fancy Fruits to teach children with disabilities the components of regional fruits and vegetables. The app includes marker-based AR elements connecting the real situation with virtual information. To evaluate the application, a field study was conducted. Eleven children with mental disabilities took part in the study. The results show high enjoyment among the participants. The study also validated the app's child-friendly design.
Florian Kern, Carla Winter, Dominik Gall, Ivo Käthner, Paul Pauli, Marc Erich Latoschik,
Immersive Virtual Reality and Gamification Within Procedurally Generated Environments to Increase Motivation During Gait Rehabilitation
, In
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)
, pp. 500-509
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{kern2019immersive,
title = {Immersive Virtual Reality and Gamification Within Procedurally Generated Environments to Increase Motivation During Gait Rehabilitation},
author = {Kern, Florian and Winter, Carla and Gall, Dominik and Käthner, Ivo and Pauli, Paul and Latoschik, Marc Erich},
booktitle = {2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
year = {2019},
pages = {500-509},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-homecoming.pdf},
doi = {10.1109/VR.2019.8797828}
}
Abstract: Virtual Reality (VR) technology offers promising opportunities to improve traditional treadmill-based rehabilitation programs. We present an immersive VR rehabilitation system that includes a head-mounted display and motion sensors. The application is designed to promote the experience of relatedness, autonomy, and competence. The application uses procedural content generation to generate diverse landscapes. We evaluated the effect of the immersive rehabilitation system on motivation and affect. We conducted a repeated measures study with 36 healthy participants to compare the immersive program to a traditional rehabilitation program. Participants reported significantly greater enjoyment, felt more competent, and experienced higher decision freedom and meaningfulness in the immersive VR gait training compared to the traditional training. They experienced significantly lower physical demand, simulator sickness, and state anxiety, and felt less pressured while still perceiving a higher personal performance. We derive three design implications for future applications in gait rehabilitation: Immersive VR provides a promising augmentation for gait rehabilitation. Gamification features provide a design guideline for content creation in gait rehabilitation. Relatedness and autonomy provide critical content features in gait rehabilitation.
Johann Schmitt, Jean-Luc Lugrin, Carolin Wienrich, Marc Erich Latoschik,
Investigating Gesture-based Commands for First-Person Shooter Games in Virtual Reality
, In
Proceedings of User-embodied Interaction in Virtual Reality, Mensch und Computer 2019
.
2019.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{LugrinGesture2019,
title = {Investigating Gesture-based Commands for First-Person Shooter Games in Virtual Reality},
author = {Schmitt, Johann and Lugrin, Jean-Luc and Wienrich, Carolin and Latoschik, Marc Erich},
booktitle = {Proceedings of User-embodied Interaction in Virtual Reality, Mensch und Computer 2019},
year = {2019},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-muc-uivr-workshop-Investigating-gesture-based-commands-for-first-person-shooter-games-in-vr.pdf}
}
Sebastian Oberdörfer, Marc Erich Latoschik,
Knowledge Encoding in Game Mechanics: Transfer-Oriented Knowledge Learning in Desktop-3D and VR
, In
International Journal of Computer Games Technology
, Vol.
2019
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{oberdorfer2019knowledge,
title = {Knowledge Encoding in Game Mechanics: Transfer-Oriented Knowledge Learning in Desktop-3D and VR},
author = {Oberdörfer, Sebastian and Latoschik, Marc Erich},
journal = {International Journal of Computer Games Technology},
year = {2019},
volume = {2019},
url = {https://www.hindawi.com/journals/ijcgt/2019/7626349/},
doi = {10.1155/2019/7626349}
}
Abstract: Affine Transformations (ATs) are a complex and abstract learning content. Encoding the AT knowledge in Game Mechanics (GMs) achieves a repetitive knowledge application and audiovisual demonstration. Playing a serious game providing these GMs leads to motivating and effective knowledge learning. Using immersive Virtual Reality (VR) has the potential to further increase the serious game's learning outcome and learning quality. This paper compares the effectiveness and efficiency of desktop-3D and VR with respect to the achieved learning outcome. The present study also analyzes the effectiveness of an enhanced audiovisual knowledge encoding and the provision of a debriefing system. The results validate the effectiveness of the knowledge encoding in GMs to achieve knowledge learning. The study also indicates that VR is beneficial for the overall learning quality and that an enhanced audiovisual encoding has only a limited effect on the learning outcome.
Marc Erich Latoschik, Florian Kern, Jan-Philipp Stauffert, Andrea Bartl, Mario Botsch, Jean-Luc Lugrin,
Not Alone Here?! Scalability and User Experience of Embodied Ambient Crowds in Distributed Social Virtual Reality
, In
IEEE Transactions on Visualization and Computer Graphics (TVCG)
, Vol.
25
(
5)
, pp. 2134-2144
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{latoschik2019alone,
title = {Not Alone Here?! Scalability and User Experience of Embodied Ambient Crowds in Distributed Social Virtual Reality},
author = {Latoschik, Marc Erich and Kern, Florian and Stauffert, Jan-Philipp and Bartl, Andrea and Botsch, Mario and Lugrin, Jean-Luc},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
year = {2019},
volume = {25},
number = {5},
pages = {2134-2144},
url = {https://ieeexplore.ieee.org/document/8643417},
doi = {10.1109/TVCG.2019.2899250}
}
Abstract: This article investigates performance and user experience in Social Virtual Reality (SVR) targeting distributed, embodied, and immersive face-to-face encounters. We demonstrate the close relationship between scalability, reproduction accuracy, and the resulting performance characteristics, as well as the impact of these characteristics on users co-located with larger groups of embodied virtual others. System scalability provides a variable number of co-located avatars and AI-controlled agents with a variety of different appearances, including realistic-looking virtual humans generated from photogrammetry scans. The article reports on how to meet the requirements of embodied SVR with today's technical off-the-shelf solutions and what to expect regarding features, performance, and potential limitations. Special care has been taken to achieve the low latencies and sufficient frame rates necessary for reliable communication of embodied social signals. We propose a hybrid evaluation approach which coherently relates results from technical benchmarks to subjective ratings and which confirms the required performance characteristics for the target scenario of larger distributed groups. A user study reveals positive effects of an increasing number of co-located social companions on the quality of experience of virtual worlds, i.e., on presence, possibility of interaction, and co-presence. It also shows that variety in avatar/agent appearance might increase eeriness but might also stimulate an increased interest of participants in the environment.
Daniel Roth, Carola Bloch, Josephine Schmitt, Lena Frischlich, Marc Erich Latoschik, Gary Bente,
Perceived Authenticity, Empathy, and Pro-social Intentions Evoked Through Avatar-mediated Self-disclosures
, In
Proceedings of Mensch Und Computer 2019
, pp. 21-30
.
New York, NY, USA
:
ACM
, 2019.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{roth2019toappearperceived,
title = {Perceived Authenticity, Empathy, and Pro-social Intentions Evoked Through Avatar-mediated Self-disclosures},
author = {Roth, Daniel and Bloch, Carola and Schmitt, Josephine and Frischlich, Lena and Latoschik, Marc Erich and Bente, Gary},
booktitle = {Proceedings of Mensch Und Computer 2019},
year = {2019},
pages = {21--30},
publisher = {ACM},
address = {New York, NY, USA},
url = {http://doi.acm.org/10.1145/3340764.3340797},
doi = {10.1145/3340764.3340797}
}
Negin Hamzeheinejad, Daniel Roth, Daniel Götz, Franz Weilbach, Marc Erich Latoschik,
Physiological Effectivity and User Experience of Immersive Gait Rehabilitation
, In
The First IEEE VR Workshop on Applied VR for Enhanced Healthcare (AVEH)
, pp. 1421-1429
.
IEEE
, 2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{hamzeheinejad2019physiological,
title = {Physiological Effectivity and User Experience of Immersive Gait Rehabilitation},
author = {Hamzeheinejad, Negin and Roth, Daniel and Götz, Daniel and Weilbach, Franz and Latoschik, Marc Erich},
booktitle = {The First IEEE VR Workshop on Applied VR for Enhanced Healthcare (AVEH)},
year = {2019},
pages = {1421-1429},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-workshop-vr-gait-preprint.pdf},
doi = {10.1109/VR.2019.8797763}
}
Abstract: Gait impairments from neurological injuries require repeated and exhaustive physical exercises for rehabilitation. Prolonged physical training in clinical environments can easily become frustrating and de-motivating for various reasons, which in turn risks decreasing efficiency during the healing process. This paper introduces an immersive VR system for gait rehabilitation which targets user experience and increased motivation while evoking the comparable physiological responses needed for successful training effects. The system provides a virtual environment consisting of open fields, forest, mountains, waterfalls, animals, and a beach for inspiring strolls and is able to include a virtual trainer as a companion during the walks. We evaluated the ecological validity of the system with healthy subjects before performing the clinical trial. We assessed the system's target qualities with a longitudinal study with 45 healthy participants on three consecutive days in comparison to a baseline non-VR condition. The system was able to evoke similar physiological responses. The workload was increased for the VR condition, but the system also elicited the higher enjoyment and motivation which was the main goal. The latter benefits slightly decreased over time (as did workload) while they were still higher than in the non-VR condition. The virtual trainer did not prove to be beneficial; the corresponding implications are discussed. Overall, the approach shows promising results, which renders the system a viable alternative for the given use case while motivating interesting directions for future work.
Sebastian Oberdörfer, Marc Erich Latoschik,
Predicting Learning Effects of Computer Games Using the Gamified Knowledge Encoding Model
, In
Entertainment Computing
Matthias Rauterberg, Fotis Liarokapis (Eds.)
, Vol.
32
, p. 100315
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{oberdorfer2019predicting,
title = {Predicting Learning Effects of Computer Games Using the Gamified Knowledge Encoding Model},
author = {Oberdörfer, Sebastian and Latoschik, Marc Erich},
editor = {Rauterberg, Matthias and Liarokapis, Fotis},
journal = {Entertainment Computing},
year = {2019},
volume = {32},
pages = {100315},
url = {https://www.sciencedirect.com/science/article/abs/pii/S1875952119300059},
doi = {10.1016/j.entcom.2019.100315}
}
Abstract: Game mechanics encode a computer game's underlying principles as their internal rules. In the case of a serious game, these game rules consist of information relevant to a specific learning content. This paper describes an approach to predicting the learning effect of computer games by analyzing the structure of the provided game mechanics. In particular, we utilize the Gamified Knowledge Encoding model to predict the learning effects of playing the computer game Kerbal Space Program (KSP). We tested the correctness of the prediction in a user study evaluating the learning effects of playing KSP. Participants achieved a significant increase in knowledge about orbital mechanics during their first gameplay hours. In the second phase of the study, we assessed KSP's applicability as an educational tool and compared it to a traditional learning method with respect to the learning outcome. The results indicate highly motivating and effective knowledge learning. Also, participants used KSP to validate complex theoretical spaceflight concepts.
Tobias Feigl, Daniel Roth, Stefan Gradl, Markus Gerhard Wirth, Michael Philippsen, Marc Erich Latoschik, Bjoern Eskofier, Christopher Mutschler,
Sick Moves! Motion Parameters as Indicators of Simulator Sickness
, In
IEEE Transactions on Visualization and Computer Graphics (TVCG)
, Vol.
25
(
11)
, pp. 3146-3157
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{feigl2019toappearmoves,
title = {Sick Moves! Motion Parameters as Indicators of Simulator Sickness},
author = {Feigl, Tobias and Roth, Daniel and Gradl, Stefan and Wirth, Markus Gerhard and Philippsen, Michael and Latoschik, Marc Erich and Eskofier, Bjoern and Mutschler, Christopher},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
year = {2019},
volume = {25},
number = {11},
pages = {3146-3157},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-sick-moves-tvcg-preprint.pdf},
doi = {10.1109/TVCG.2019.2932224}
}
Abstract: We explore motion parameters, more specifically gait parameters, as an objective indicator to assess simulator sickness in Virtual Reality (VR). We discuss the potential relationships between simulator sickness, immersion, and presence. We used two different camera pose (position and orientation) estimation methods for the evaluation of motion tasks in a large-scale VR environment: a simple model and an optimized model that allows for a more accurate and natural mapping of human senses. Participants performed multiple motion tasks (walking, balancing, running) in three conditions: a physical reality baseline condition, a VR condition with the simple model, and a VR condition with the optimized model. We compared these conditions with regard to the resulting sickness and gait, as well as the perceived presence in the VR conditions. The subjective measures confirmed that the optimized pose estimation model reduces simulator sickness and increases the perceived presence. The results further show that both models affect the gait parameters and simulator sickness, which is why we further investigated a classification approach that deals with non-linear correlation dependencies between gait parameters and simulator sickness. We argue that our approach could be used to assess and predict simulator sickness based on human gait parameters, and we provide implications for future research.
Jean-Luc Lugrin, Maximilian Landeck, Marc Erich Latoschik,
Simulated Reference Frame Effects on Steering, Jumping and Sliding
, In
Proceedings of the 26th IEEE Virtual Reality (VR) conference
, pp. 1062-1063
.
2019.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2019simulated,
title = {Simulated Reference Frame Effects on Steering, Jumping and Sliding},
author = {Lugrin, Jean-Luc and Landeck, Maximilian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 26th IEEE Virtual Reality (VR) conference},
year = {2019},
pages = {1062-1063},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-simulated-frame-effect-vr-preprint.pdf}
}
Daniel Roth, Gary Bente, Peter Kullmann, David Mal, Christian Felix Purps, Kai Vogeley, Marc Erich Latoschik,
Technologies for Social Augmentations in User-Embodied Virtual Reality
, In
25th ACM Symposium on Virtual Reality Software and Technology (VRST)
, pp. 1-12
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@conference{roth2019technologies,
title = {Technologies for Social Augmentations in User-Embodied Virtual Reality},
author = {Roth, Daniel and Bente, Gary and Kullmann, Peter and Mal, David and Purps, Christian Felix and Vogeley, Kai and Latoschik, Marc Erich},
booktitle = {25th ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2019},
pages = {1-12},
url = {https://dl.acm.org/doi/pdf/10.1145/3359996.3364269},
doi = {10.1145/3359996.3364269}
}
Abstract: Technologies for Virtual, Mixed, and Augmented Reality (VR, MR, and AR) make it possible to artificially augment social interactions and thus to go beyond what is possible in real life. Motivations for the use of social augmentations are manifold, for example, to synthesize behavior when sensory input is missing, to provide additional affordances in shared environments, or to support inclusion and training of individuals with social communication disorders. We review and categorize augmentation approaches and propose a software architecture based on four data layers. Three further components handle the status analysis, the modification, and the blending of behaviors. We present a prototype (injectX) that supports behavior tracking (body motion, eye gaze, and facial expressions from the lower face), status analysis, decision-making, augmentation, and behavior blending in immersive interactions. Along with a critical reflection, we consider further technical and ethical aspects.
David Heidrich, Sebastian Oberdörfer, Marc Erich Latoschik,
The Effects of Immersion on Harm-Inducing Factors in Virtual Slot Machines
, In
Proceedings of the 26th IEEE Virtual Reality (VR) conference
, pp. 793-801
.
IEEE
, 2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{heidrichtobepublishedeffects,
title = {The Effects of Immersion on Harm-Inducing Factors in Virtual Slot Machines},
author = {Heidrich, David and Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 26th IEEE Virtual Reality (VR) conference},
year = {2019},
pages = {793-801},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-gambling-vr-preprint.pdf},
doi = {10.1109/VR.2019.8798021}
}
Abstract: Slot machines are one of the most played games by pathological gamblers. New technologies, e.g. immersive Virtual Reality (VR), offer more possibilities to exploit erroneous beliefs in the context of gambling. However, the risk potential of VR-based gambling has not yet been researched. A higher immersion might increase harmful aspects, thus making VR realizations more dangerous. Measuring harm-inducing factors reveals the risk potential of virtual gambling. In a user study, we analyze a slot machine realized as a desktop 3D and as an immersive VR version. Both versions are compared with respect to their effects on dissociation, urge to gamble, dark flow, and illusion of control. Our study shows significantly higher values of dissociation, dark flow, and urge to gamble in the VR version. Presence significantly correlates with all measured harm-inducing factors. We demonstrate that VR-based gambling has a higher risk potential. This underscores the importance of regulating VR-based gambling.
Martin Mišiak, Niko Wissmann, Arnulph Fuhrmann, Marc Erich Latoschik,
The Impact of Stereo Rendering on the Perception of Normal Mapped Geometry in Virtual Reality
, In
Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology
, pp. 92:1-92:2
.
New York, NY, USA
:
Association for Computing Machinery
, 2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{misiak2019impact,
title = {The Impact of Stereo Rendering on the Perception of Normal Mapped Geometry in Virtual Reality},
author = {Mišiak, Martin and Wissmann, Niko and Fuhrmann, Arnulph and Latoschik, Marc Erich},
booktitle = {Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology},
year = {2019},
pages = {92:1-92:2},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-vrst-normal-maps-preprint.pdf},
doi = {10.1145/3359996.3364811}
}
Abstract: This paper investigates the effects of normal mapping on the perception of geometric depth between stereoscopic and non-stereoscopic views. Results show that, in a head-tracked environment, the addition of binocular disparity has no impact on the error rate in the detection of normal-mapped geometry. It does, however, significantly shorten the detection time.
Jean-Luc Lugrin, Marc Erich Latoschik, Yann Glémarec, Anne-Gween Bosser, Mathieu Chollet, Birgit Lugrin,
Towards Narrative-Driven Atmosphere for Virtual Classroom
, In
Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
, pp. 1-6
.
2019.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2019towards,
title = {Towards Narrative-Driven Atmosphere for Virtual Classroom},
author = {Lugrin, Jean-Luc and Latoschik, Marc Erich and Glémarec, Yann and Bosser, Anne-Gween and Chollet, Mathieu and Lugrin, Birgit},
booktitle = {Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems},
year = {2019},
pages = {1-6},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-chi-lbw-narrative-driven-atmosphere-preprint.pdf}
}
Sebastian Oberdörfer, David Heidrich, Marc Erich Latoschik,
Usability of Gamified Knowledge Learning in VR and Desktop-3D
, In
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
, pp. 1-13
.
New York, NY, USA
:
Association for Computing Machinery
, 2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfertobepublishedusability,
title = {Usability of Gamified Knowledge Learning in VR and Desktop-3D},
author = {Oberdörfer, Sebastian and Heidrich, David and Latoschik, Marc Erich},
booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
year = {2019},
pages = {1-13},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-chi-getit-usability-preprint.pdf},
doi = {10.1145/3290605.3300405}
}
Abstract: Affine Transformations (ATs) often escape an intuitive approach due to their high complexity. Therefore, we developed GEtiT, which directly encodes ATs in its game mechanics and scales the knowledge's level of abstraction. This results in an intuitive application as well as audiovisual presentation of ATs and hence in effective knowledge learning. We also developed a specific Virtual Reality (VR) version to explore the effects of immersive VR on the learning outcomes. This paper presents our approach of directly encoding abstract knowledge in game mechanics, the conceptual design of GEtiT, and its technical implementation. Both versions are compared with regard to their usability in a user study. The results show that both GEtiT versions induce a high degree of flow and elicit good intuitive use. They validate the effectiveness of the design and the resulting knowledge application requirements. Participants favored GEtiT VR, thus showing a potentially higher learning quality when using VR.
Sebastian von Mammen, Andreas Müller, Marc Erich Latoschik, Mario Botsch, Kirsten Brukamp, Carsten Schröder, Michael Wacker,
VIA VR: A Technology Platform for Virtual Adventures for Healthcare and Well-Being
, In
VS-Games
, pp. 1-2
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{Mammen:2019aa,
title = {VIA VR: A Technology Platform for Virtual Adventures for Healthcare and Well-Being},
author = {von Mammen, Sebastian and Müller, Andreas and Latoschik, Marc Erich and Botsch, Mario and Brukamp, Kirsten and Schröder, Carsten and Wacker, Michael},
booktitle = {VS-Games},
year = {2019},
pages = {1-2},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-viavr-vsgames.pdf}
}
Abstract: To harness the potential of virtual reality (VR) in the healthcare sector, the expenditures for users such as clinics, doctors, or health insurances have to be reduced. The technology platform VIA VR (an acronym from ``virtual reality adventures'' and VR) promises to fulfill this requirement by combining several key technologies that allow specialists from the healthcare sector to create high-impact VR adventures without the need for a background in programming or the design of virtual worlds. This paper fleshes out the concept of VIA VR, its technological pillars, and the planned R&D agenda.
Carla Winter, Florian Kern, Ivo Käthner, Dominik Gall, Marc Erich Latoschik, Paul Pauli,
Virtuelle Realität als Ergänzung des Laufbandtrainings zur Rehabilitation von Gangstörungen bei Patienten mit Schlaganfall und Multipler Sklerose
, In
Ethik in der Medizin
, Vol.
14
(
15)
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@presentation{winter2019virtuelle,
title = {Virtuelle Realität als Ergänzung des Laufbandtrainings zur Rehabilitation von Gangstörungen bei Patienten mit Schlaganfall und Multipler Sklerose},
author = {Winter, Carla and Kern, Florian and Käthner, Ivo and Gall, Dominik and Latoschik, Marc Erich and Pauli, Paul},
journal = {Ethik in der Medizin},
year = {2019},
volume = {14},
number = {15},
url = {}
}
Abstract: Virtual reality (VR) technology offers new treatment options in the rehabilitation of neurological diseases. Previous studies have shown that VR-based treadmill training for patients with gait disorders increases not only the physical but also the psychological success of therapy, making it a useful complement to conventional gait training.
The present study investigated the effects of an immersive presentation of a virtual environment (via a head-mounted display, HMD) compared with a semi-immersive VR presentation (via a flat-screen monitor) and conventional treadmill training without VR.
To this end, first 36 healthy participants and subsequently 14 MS and stroke patients with gait disorders each completed the three treadmill conditions (immersive, semi-immersive, and without VR).
The virtual environment contained gamification elements to increase motivation and was implemented on the basis of Ryan and Deci's self-determination theory. The study with healthy participants served to assess usability and to uncover technical deficits. In a subsequent proof-of-concept study with the 14 MS and stroke patients, the application was used to test whether patients undergoing treatment for their gait disorders (EDSS < 6) can improve their walking abilities with VR-supported treadmill training. The primary outcome measure in both studies was the average walking speed within the individual conditions. Standardized questionnaires were additionally used to assess motivation, usability, presence (Igroup Presence Questionnaire), and side effects of the VR system (Simulator Sickness Questionnaire). Participants were also asked about their preference among the three conditions.
Both the study with healthy participants and the patient study showed a significantly higher average walking speed in the HMD condition than in treadmill training without VR. In both studies, presence was significantly higher in the HMD condition than in the monitor condition. Moreover, the virtual world did not induce any side effects in the sense of simulator sickness. Participants had no relevant VR-related postural difficulties or problems with the visual display. While motivation among the healthy participants was higher after the HMD run than in the other two conditions, no significant differences were detected in the patient study. Nevertheless, the patients reported perceiving the virtual world as more motivating in the HMD condition than in the monitor condition. Of the three conditions, HMD treadmill training was preferred by 71% (n = 14) of the patients and 89% (n = 36) of the healthy participants. Likewise, 71% of the patients could imagine using HMD treadmill training more frequently in the future.
Nina Döllinger, Carolin Wienrich, Erik Wolf, Mario Botsch, Marc Erich Latoschik,
ViTraS - Virtual Reality Therapy by Stimulation of Modulated Body Image - Project Outline
, In
2019 Mensch und Computer - Workshopband
, pp. 606-611
.
2019.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{dollinger2019toappearvitras,
title = {ViTraS - Virtual Reality Therapy by Stimulation of Modulated Body Image - Project Outline},
author = {Döllinger, Nina and Wienrich, Carolin and Wolf, Erik and Botsch, Mario and Latoschik, Marc Erich},
booktitle = {2019 Mensch und Computer - Workshopband},
year = {2019},
pages = {606-611},
url = {https://dl.gi.de/handle/20.500.12116/25250},
doi = {10.18420/muc2019-ws-633}
}
Abstract: In recent decades, obesity has become one of the major public health issues and is associated with other severe diseases. Although current multidisciplinary therapy approaches already include behavioral therapy techniques, the oftentimes remaining lack of psychotherapeutic support after surgery leads to relapses and renewed weight gain. This paper presents an overview of the project ViTraS - Virtual Reality Therapy by Stimulation of Modulated Body Image - which addresses these challenges by (i) developing an integrative model predicting the influential paths of immersive media for an effective behavioral change; (ii) developing an augmented reality (AR) mirror system enabling an effective therapy on patients' body self-perception; and (iii) developing a multi-user virtual reality (VR) system supplying social support from therapists and other patients. The three components of the ViTraS project are briefly introduced, as well as a first VR-based prototype of the mirror system.
2018
Florian Niebling, Ferdinand Maiwald, Kristina Barthel, Marc Erich Latoschik,
4D Augmented City Models, Photogrammetric Creation and Dissemination
, In
Digital Research and Education in Architectural Heritage
Sander Münster, Kristina Friedrichs, Florian Niebling, Agnieszka Seidel-Grzesińska (Eds.),
, pp. 196-212
.
Cham
:
Springer International Publishing
, 2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10.1007/978-3-319-76992-9_12,
title = {4D Augmented City Models, Photogrammetric Creation and Dissemination},
author = {Niebling, Florian and Maiwald, Ferdinand and Barthel, Kristina and Latoschik, Marc Erich},
editor = {Münster, Sander and Friedrichs, Kristina and Niebling, Florian and Seidel-Grzesińska, Agnieszka},
booktitle = {Digital Research and Education in Architectural Heritage},
year = {2018},
pages = {196--212},
publisher = {Springer International Publishing},
address = {Cham},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-uhdl-augmented-city-models-niebling-preprint.pdf},
doi = {10.1007/978-3-319-76992-9_12}
}
Abstract: The availability of digital image repositories of historical photographs offers new possibilities to historians in their research. In addition to representing a large collection of data records themselves, image archives allow for new methods of research, from large-scale statistical analysis to algorithmic generation of knowledge, such as historical 3D models, directly from these sources. In this paper, we explore methods to work with digital image libraries, from the creation of 3D or, by extension, time-annotated 4D models, to the eventual dissemination of research findings in teaching/learning scenarios. We review pedagogical approaches to reach different learning objectives, as well as methods that allow for the inclusion of historic city models employing Augmented Reality in mobile learning environments.
Jean-Luc Lugrin, Florian Kern, Ruben Schmidt, Constantin Kleinbeck, Daniel Roth, Christian Daxer, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik,
A Location-Based VR Museum
, In
Proceedings of the 10th IEEE International Conference on Virtual Worlds for Serious Applications (VS-Games)
IEEE (Ed.),
, pp. 1-2
.
IEEE
, 2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{lugrin2018locationbased,
title = {A Location-Based VR Museum},
author = {Lugrin, Jean-Luc and Kern, Florian and Schmidt, Ruben and Kleinbeck, Constantin and Roth, Daniel and Daxer, Christian and Feigl, Tobias and Mutschler, Christopher and Latoschik, Marc Erich},
editor = {{IEEE}},
booktitle = {Proceedings of the 10th IEEE International Conference on Virtual Worlds for Serious Applications (VS-Games)},
year = {2018},
pages = {1-2},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-lugrin-vrmuseum-vsgames.pdf},
doi = {10.1109/VS-Games.2018.8493404}
}
Abstract: This poster presents a novel type of Virtual Reality (VR) application for education and culture: a location-based VR museum, a large room-scale multi-user multi-zone virtual museum. The VR museum was designed to support over 100 simultaneous users walking in a large tracking area (600 m2) and sharing a virtual space ten times larger (7000 m2) containing indoor and outdoor dinosaur exhibitions. The poster gives an overview of the system and its main features and discusses its potential benefits and future evaluation.
Jean-Luc Lugrin, Maximilian Ertl, Philipp Krop, Richard Klüpfel, Sebastian Stierstorfer, Bianka Weisz, Maximilian Rück, Johann Schmitt, Nina Schmidt, Marc Erich Latoschik,
Any “Body” There? - Avatar Visibility Effects in a Virtual Reality Game
, In
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)
, pp. 17-24
.
2018.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2018there,
title = {Any “Body” There? - Avatar Visibility Effects in a Virtual Reality Game},
author = {Lugrin, Jean-Luc and Ertl, Maximilian and Krop, Philipp and Klüpfel, Richard and Stierstorfer, Sebastian and Weisz, Bianka and Rück, Maximilian and Schmitt, Johann and Schmidt, Nina and Latoschik, Marc Erich},
booktitle = {2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
year = {2018},
pages = {17-24},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ieeevr-lugrin-igod-preprint.pdf}
}
Jean-Luc Lugrin, Fred Charles, Michael Habel, Henrik Dudaczy, Sebastian Oberdörfer, Jamie Matthews, Julie Porteous, Alice Wittmann, Christian Seufert, Silke Grafe, Marc Erich Latoschik,
Benchmark Framework for Virtual Students’ Behaviours
, In
Proceedings of the 17th Conference on Autonomous Agents and MultiAgent Systems
, pp. 2236–2238
.
ACM
, 2018.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2018benchmark,
title = {Benchmark Framework for Virtual Students’ Behaviours},
author = {Lugrin, Jean-Luc and Charles, Fred and Habel, Michael and Dudaczy, Henrik and Oberdörfer, Sebastian and Matthews, Jamie and Porteous, Julie and Wittmann, Alice and Seufert, Christian and Grafe, Silke and Latoschik, Marc Erich},
booktitle = {Proceedings of the 17th Conference on Autonomous Agents and MultiAgent Systems},
year = {2018},
pages = {2236–2238},
publisher = {ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-acm-aamas-lugrin-preprint.pdf}
}
Daniel Roth, Constantin Kleinbeck, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik,
Beyond Replication: Augmenting Social Behaviors in Multi-User Virtual Realities
, In
Proceedings of the 25th IEEE Virtual Reality (VR) conference
, pp. 215-222
.
2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{roth2018beyond,
title = {Beyond Replication: Augmenting Social Behaviors in Multi-User Virtual Realities},
author = {Roth, Daniel and Kleinbeck, Constantin and Feigl, Tobias and Mutschler, Christopher and Latoschik, Marc Erich},
booktitle = {Proceedings of the 25th IEEE Virtual Reality (VR) conference},
year = {2018},
pages = {215-222},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ieeevr-behav-augm-preprint.pdf}
}
Abstract: This paper presents a novel approach for the augmentation of social behaviors in virtual reality (VR). We designed three visual transformations for behavioral phenomena crucial to everyday social interactions: eye contact, joint attention, and grouping. To evaluate the approach, we let users interact socially in a virtual museum using a large-scale multi-user tracking environment. Using a between-subject design (N = 125), we formed groups of five participants. Participants were represented as simplified avatars and experienced the virtual museum simultaneously, either with or without the augmentations. Our results indicate that our approach can significantly increase social presence in multi-user environments and that the augmented experience appears more thought-provoking. Furthermore, the augmentations also seem to affect the actual behavior of participants with regard to more eye contact and more focus on avatars/objects in the scene. We interpret these findings as first indicators for the potential of social augmentations to impact social perception and behavior in VR.
Florian Niebling, Jonas Bruschke, Marc Erich Latoschik,
Browsing Spatial Photography for Dissemination of Cultural Heritage Research Results using Augmented Models
, In
Eurographics Workshop on Graphics and Cultural Heritage
Robert Sablatnig, Michael Wimmer (Eds.),
.
The Eurographics Association
, 2018.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{gch2018niebling,
title = {Browsing Spatial Photography for Dissemination of Cultural Heritage Research Results using Augmented Models},
author = {Niebling, Florian and Bruschke, Jonas and Latoschik, Marc Erich},
editor = {Sablatnig, Robert and Wimmer, Michael},
booktitle = {Eurographics Workshop on Graphics and Cultural Heritage},
year = {2018},
publisher = {The Eurographics Association},
url = {},
doi = {10.2312/gch.20181358}
}
Florian Niebling, Marc Erich Latoschik,
Browsing Spatial Photography using Augmented Models
, In
Proceedings of the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 47-48
.
IEEE, ACM
, 2018.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{niebling2018ismar,
title = {Browsing Spatial Photography using Augmented Models},
author = {Niebling, Florian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2018},
pages = {47-48},
publisher = {IEEE, ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ismar-browsing-spatial-photography-niebling-preprint.pdf}
}
Marc Erich Latoschik, Sebastian von Mammen,
BSc Games Engineering
Björn Bartholdy, Linda Breitlauch, André Czauderna, Gundolf S Freyermuth (Eds.),
, Vol.
Games Studieren - was, wie, wo?
, pp. 491-502
.
Transcript - Bild und Bit.
, 2018.
[BibTeX]
[Download]
[BibSonomy]
@inbook{Latoschik:2018aa,
title = {BSc Games Engineering},
author = {Latoschik, Marc Erich and von Mammen, Sebastian},
editor = {Bartholdy, Björn and Breitlauch, Linda and Czauderna, André and Freyermuth, Gundolf S},
year = {2018},
volume = {Games Studieren - was, wie, wo?},
pages = {491-502},
publisher = {Transcript - Bild und Bit.},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-GamesEngineering-Studiengang-Wuerzburg.pdf}
}
Doris Aschenbrenner, Michael Rojkov, Florian Leutert, Jouke Verlinden, Stephan Lukosch, Marc Erich Latoschik, Klaus Schilling,
Comparing Different Augmented Reality Support Applications for Cooperative Repair of an Industrial Robot
, In
Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 69-74
.
2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{aschenbrenner2018comparing,
title = {Comparing Different Augmented Reality Support Applications for Cooperative Repair of an Industrial Robot},
author = {Aschenbrenner, Doris and Rojkov, Michael and Leutert, Florian and Verlinden, Jouke and Lukosch, Stephan and Latoschik, Marc Erich and Schilling, Klaus},
booktitle = {Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2018},
pages = {69-74},
url = {}
}
Abstract: Digitization and the growing capabilities of data networks enable companies to perform tasks via remote support which previously required service personnel to travel. But which mixed reality method leads to better results regarding human factors, grounding, and performance criteria? This paper reports on a collaborative user study in which a local worker is guided by a remote expert with the help of different augmented reality methods, specifically a see-through HMD, spatial projection, and a video-mixing tablet. The performed task is a controller exchange in the switch cabinet of an industrial robot, a task rather typical for failure detection in the field. The study was conducted in collaboration with a technician school, from which 50 technician apprentices participated. Our results show clear advantages of using augmented reality (AR) over traditional conditions (audio, video, screenshot) for remote support. They further give significant indications for using a projection-based AR method.
Sebastian Oberdörfer, Marc Erich Latoschik,
Effective Orbital Mechanics Knowledge Training Using Game Mechanics
, In
Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2018)
, pp. 1-8
.
2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2018effective,
title = {Effective Orbital Mechanics Knowledge Training Using Game Mechanics},
author = {Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2018)},
year = {2018},
pages = {1-8},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-vsgames-ksp-preprint.pdf},
doi = {10.1109/VS-Games.2018.8493417}
}
Abstract: Computer games consist of game mechanics (GMs) that encode a game's rules, principles, and overall knowledge, thus structuring the gameplay. These knowledge rules can also consist of information relevant to a specific learning content. This knowledge is then required and trained by periodically executing the GMs during the gameplay. Simultaneously, GMs demonstrate the encoded knowledge in an audiovisual way. Hence, GMs create learning affordances for the learning content, requiring its application and informing about the underlying principles. However, it is still unclear how knowledge can be directly encoded and trained using GMs. Therefore, this paper analyzes the GMs used in the computer game Kerbal Space Program (KSP) to identify the encoded knowledge and to predict their training effects. We also report the results of a study testing the training effects of KSP when played as a regular game and when used as a specific training environment. The results indicate a highly motivating and effective knowledge training using the identified GMs.
Sebastian Oberdörfer, Marc Erich Latoschik,
Effectivity of Affine Transformation Knowledge Training Using Game Mechanics
, In
Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2018)
, pp. 1-8
.
2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2018effectivity,
title = {Effectivity of Affine Transformation Knowledge Training Using Game Mechanics},
author = {Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2018)},
year = {2018},
pages = {1-8},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-vsgames-getit1-preprint.pdf},
doi = {10.1109/VS-Games.2018.8493418}
}
Abstract: The Gamified Training Environment for Affine Transformation (GEtiT) was developed as a demonstrator for the Gamified Knowledge Encoding model (GKE). The GKE is a novel framework that defines knowledge training using game mechanics (GMs). It describes the process of directly encoding learning contents in GMs to allow for an engaging and effective transfer-oriented knowledge training. GEtiT was developed to facilitate the training of the complex and abstract affine transformation (AT) knowledge. The complexity of the AT makes this learning content hard to demonstrate, so learners frequently experience issues when trying to develop an understanding of its application. During the gameplay, the application of the AT's mathematically grounded aspects is required, and information about the underlying principles is provided. This article gives a short overview of GEtiT's structure and the knowledge encoding process. It also presents the results of a study measuring the training effectivity and motivational aspects of GEtiT. The results indicate a training outcome similar to a traditional paper-based training method but a higher motivation of the GEtiT players. Hence, GEtiT yields a higher learning quality.
Daniel Roth, Peter Kullmann, Gary Bente, Dominik Gall, Marc Erich Latoschik,
Effects of Hybrid and Synthetic Social Gaze in Avatar-Mediated Interactions
, In
Adjunct Proceedings of the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 103-108
.
IEEE, ACM
, 2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{roth2018effects,
title = {Effects of Hybrid and Synthetic Social Gaze in Avatar-Mediated Interactions},
author = {Roth, Daniel and Kullmann, Peter and Bente, Gary and Gall, Dominik and Latoschik, Marc Erich},
booktitle = {Adjunct Proceedings of the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2018},
pages = {103-108},
publisher = {IEEE, ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ismar-augmentedgaze-roth-preprint.pdf}
}
Abstract: Human gaze is a crucial element in social interactions and therefore an important topic for social Augmented, Mixed, and Virtual Reality (AR, MR, VR) applications. In this paper we systematically compare four modes of gaze transmission: (1) natural gaze, (2) hybrid gaze, which combines natural gaze transmission with a social gaze model, (3) synthesized gaze, which combines a random gaze transmission with a social gaze model, and (4) purely random gaze. Investigating dyadic interactions, results show a linear trend for the perception of virtual rapport, trust, and interpersonal attraction, suggesting that these measures increase with higher naturalness and social adequateness of the transmission mode. We further investigated the perception of realism as well as the resulting gaze behavior of the avatars and the human participants. We discuss these results and their implications.
Jan-Philipp Stauffert, Florian Niebling, Marc Erich Latoschik,
Effects of Latency Jitter on Simulator Sickness in a Search Task
, In
25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR)
, pp. 121-127
.
2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{stauffert2018effects,
title = {Effects of Latency Jitter on Simulator Sickness in a Search Task},
author = {Stauffert, Jan-Philipp and Niebling, Florian and Latoschik, Marc Erich},
booktitle = {25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR)},
year = {2018},
pages = {121-127},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ieeevr-latency-preprint.pdf}
}
Abstract: Low latency is a fundamental requirement for Virtual Reality (VR) systems to reduce the potential risks of cybersickness and to increase effectiveness, efficiency, and user experience. In contrast to the effects of uniform latency degradation, the influence of latency jitter on user experience in VR is not well researched, although today's consumer VR systems are vulnerable in this respect. In this work we report on the impact of latency jitter on cybersickness in HMD-based VR environments. Test subjects were given a search task in Virtual Reality, provoking both head rotation and translation. One group experienced artificially added latency jitter in the tracking data of their head-mounted display. The introduced jitter pattern was a replication of real-world latency behavior extracted and analyzed from an existing example VR system. The effects of the introduced latency jitter were measured by the self-reported simulator sickness questionnaire (SSQ) and by physiological measurements. We found a significant increase in self-reported simulator sickness. We therefore argue that measuring and controlling latency based on average values taken at a few time intervals is not enough to assure the required timeliness behavior; latency jitter needs to be considered when designing experiences for Virtual Reality.
Sebastian Oberdörfer, Martin Fischbach, Marc Erich Latoschik,
Effects of VE Transition Techniques on Presence, IVBO, Efficiency, and Naturalness
, In
Proceedings of the 6th Symposium on Spatial User Interaction (SUI '18)
, pp. 89-99
.
New York, NY, USA
:
Association for Computing Machinery
, 2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2018effects,
title = {Effects of VE Transition Techniques on Presence, IVBO, Efficiency, and Naturalness},
author = {Oberdörfer, Sebastian and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 6th Symposium on Spatial User Interaction (SUI '18)},
year = {2018},
pages = {89-99},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-sui-comp-of-vr-transition-techniques-preprint.pdf},
doi = {10.1145/3267782.3267787}
}
Abstract: Several transition techniques (TTs) exist for Virtual Reality (VR) that allow users to travel to a new target location in the vicinity of their current position. To overcome a greater distance or even move to a different Virtual Environment (VE), other TTs are required that allow for an immediate, quick, and believable change of location. Such TTs are especially relevant for VR user studies and storytelling in VR, yet their effect on the experienced presence, illusion of virtual body ownership (IVBO), and naturalness, as well as their efficiency, is largely unexplored. In this paper we thus identify and compare three metaphors for transitioning between VEs with respect to those qualities: an in-VR head-mounted display metaphor, a turn-around metaphor, and a simulated blink metaphor. Surprisingly, the results show that the tested metaphors did not affect the experienced presence and IVBO. This is especially important for researchers and game designers who want to build more natural VEs.
Daniel Roth, Josephine Schmitt, Carola Bloch, Lena Frischlich, Marc Erich Latoschik, Gary Bente,
Empathy for Avatars: The Influence of Perceived Authenticity on Empathy and Behavioral Intentions
, In
Presentation on the 68th Annual Conference of the International Communication Association (ICA), May 24-28 2018, Prague, Czech Republic
.
2018.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{roth2018empathy,
title = {Empathy for Avatars: The Influence of Perceived Authenticity on Empathy and Behavioral Intentions},
author = {Roth, Daniel and Schmitt, Josephine and Bloch, Carola and Frischlich, Lena and Latoschik, Marc Erich and Bente, Gary},
booktitle = {Presentation on the 68th Annual Conference of the International Communication Association (ICA), May 24-28 2018, Prague, Czech Republic},
year = {2018},
url = {}
}
Daniel Roth, David Mal, Ivan Polyschev, Maximilian Wiedemann, Christoph Klöffel, Christian Purps, Jens To, Marc Erich Latoschik,
Extended Abstract: Artificial Nonverbal Mimicry in Immersive Embodied Social Interactions: Meet the Mimicry Injector
, In
Presentation on the 68th Annual Conference of the International Communication Association (ICA), May 24-28 2018, Prague, Czech Republic
.
2018.
[BibTeX]
[Download]
[BibSonomy]
@article{roth2018extended,
title = {Extended Abstract: Artificial Nonverbal Mimicry in Immersive Embodied Social Interactions: Meet the Mimicry Injector},
author = {Roth, Daniel and Mal, David and Polyschev, Ivan and Wiedemann, Maximilian and Klöffel, Christoph and Purps, Christian and To, Jens and Latoschik, Marc Erich},
journal = {Presentation on the 68th Annual Conference of the International Communication Association (ICA), May 24-28 2018, Prague, Czech Republic},
year = {2018},
url = {}
}
Martin Fischbach, Michael Brandt, Chris Zimmerer, Jean-Luc Lugrin, Marc Erich Latoschik, Birgit Lugrin,
Follow the White Robot - A Role-Playing Game with a Robot Game Master
, In
17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2018)
, pp. 1812-1814
.
ACM
, 2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fischbach:2018ab,
title = {Follow the White Robot - A Role-Playing Game with a Robot Game Master},
author = {Fischbach, Martin and Brandt, Michael and Zimmerer, Chris and Lugrin, Jean-Luc and Latoschik, Marc Erich and Lugrin, Birgit},
booktitle = {17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2018)},
year = {2018},
pages = {1812-1814},
publisher = {ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-aamas-demo-white-robot-camera-ready-v2-preprint.pdf}
}
Abstract: We describe a social robot acting as a game master in an interactive tabletop role-playing game. The Robot Game Master (RGM) takes on the role of different characters, which the human players meet during the adventure, as well as of the narrator. The demonstration presents a novel software and hardware platform that allows the robot to (1) proactively lead through the storyline and to (2) react to changes in the ongoing game in real-time, while (3) fostering players' collaboration.
Sebastian von Mammen, Andreas Knote, Daniel Roth, Marc Erich Latoschik,
Games Engineering. Wissenschaft mit, über und für Interaktive Systeme
Björn Bartholdy, Linda Breitlauch, André Czauderna, Gundolf S Freyermuth (Eds.),
, Vol.
Games Studieren - was, wie, wo? Staatliche Studienangebote im Bereich digitaler Spiele
, pp. 269-318
.
Transcript - Bild und Bit
, 2018.
[BibTeX]
[Download]
[BibSonomy]
@inbook{Mammen:2018aa,
title = {Games Engineering. Wissenschaft mit, über und für Interaktive Systeme},
author = {von Mammen, Sebastian and Knote, Andreas and Roth, Daniel and Latoschik, Marc Erich},
editor = {Bartholdy, Björn and Breitlauch, Linda and Czauderna, André and Freyermuth, Gundolf S},
year = {2018},
volume = {Games Studieren - was, wie, wo? Staatliche Studienangebote im Bereich digitaler Spiele},
pages = {269--318},
publisher = {Transcript - Bild und Bit},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Mammen2018aa.pdf}
}
Sebastian Oberdörfer, Marc Erich Latoschik,
Gamified Knowledge Encoding: Knowledge Training Using Game Mechanics
, In
Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2018)
, pp. 1-2
.
2018.
Best Poster Award 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2018gamified,
title = {Gamified Knowledge Encoding: Knowledge Training Using Game Mechanics},
author = {Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2018)},
year = {2018},
pages = {1-2},
note = {Best Poster Award 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-vsgames-gke-preprint.pdf},
doi = {10.1109/VS-Games.2018.8493425}
}
Abstract: Game mechanics (GMs) encode a game's rules, underlying principles, and overall knowledge. During the gameplay, players practice this knowledge through repetition and compile mental models of it. Mental models allow for a training transfer from a training context to a different context. Hence, as GMs can encode any knowledge, they can also encode specific learning contents as their rules and be used for an effective transfer-oriented knowledge training. In this article, we propose the Gamified Knowledge Encoding model (GKE), which not only describes a direct encoding of a specific learning content in GMs but also defines the resulting training effects. Ultimately, the GKE can be used as an underlying guideline to develop well-tailored game-based training environments.
Negin Hamzeheinejad, Samantha Straka, Dominik Gall, Franz Weilbach, Marc Erich Latoschik,
Immersive Robot-Assisted Virtual Reality Therapy for Neurologically-Caused Gait Impairments
, In
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)
, pp. 565-566
.
IEEE
, 2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{hamzeheinejad2018immersive,
title = {Immersive Robot-Assisted Virtual Reality Therapy for Neurologically-Caused Gait Impairments},
author = {Hamzeheinejad, Negin and Straka, Samantha and Gall, Dominik and Weilbach, Franz and Latoschik, Marc Erich},
booktitle = {2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
year = {2018},
pages = {565-566},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-vr-poster-vr-gait.pdf},
doi = {10.1109/VR.2018.8446125}
}
Abstract: This paper presents an immersive Virtual Reality (VR) therapy system for gait rehabilitation after neurological impairments, e.g., caused by accidents or strokes. The system aims to increase patients' motivation to perform the repeated exercises by providing stimulating virtual exercise environments, with the final goal of increasing therapy efficiency and effectiveness. Instead of simply working out on immobile stationary devices, the system allows patients to walk through and explore a stimulating virtual world. Patients are immersed in the virtual environments using a Head-Mounted Display (HMD). Walking patterns are captured by motion sensors attached to the patients' feet to synchronize locomotion speed between the real and the virtual world. A user-centered design process evaluated usability, user experience, and feasibility to confirm the overall goals of the system before any sensitive clinical trials with impaired patients can start. Overall, the results demonstrated an encouraging user experience and acceptance, while the system did not induce any unwanted side-effects, e.g., nausea or cyber-sickness.
Daniel Roth, David Mal, Christian Felix Purps, Peter Kullmann, Marc Erich Latoschik,
Injecting Nonverbal Mimicry with Hybrid Avatar-Agent Technologies: A Naïve Approach
, In
Proceedings of the 6th ACM Symposium on Spatial User Interaction (SUI)
, pp. 69-73
.
ACM
, 2018.
Honorable mention award 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{roth2018injecting,
title = {Injecting Nonverbal Mimicry with Hybrid Avatar-Agent Technologies: A Naïve Approach},
author = {Roth, Daniel and Mal, David and Purps, Christian Felix and Kullmann, Peter and Latoschik, Marc Erich},
booktitle = {Proceedings of the 6th ACM Symposium on Spatial User Interaction (SUI)},
year = {2018},
pages = {69-73},
publisher = {ACM},
note = {Honorable mention award 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-sui-mimicry-roth-preprint.pdf},
doi = {10.1145/3267782.3267791}
}
Abstract: Humans communicate to a large degree through nonverbal behavior. Nonverbal mimicry, i.e., the imitation of another's behavior, can positively affect social interactions. In virtual environments, user behavior can be replicated to avatars, and agent behaviors can be artificially constructed. By combining both, hybrid avatar-agent technologies aim at actively mediating virtual communication to foster interpersonal understanding and rapport. We present a naïve prototype, the “Mimicry Injector”, that injects artificial mimicry into real-time virtual interactions. In an evaluation study, two participants were embodied in a Virtual Reality (VR) simulation and had to perform a negotiation task. Their virtual characters either a) replicated only the original behavior or b) displayed the original behavior plus induced mimicry. We found that most participants did not detect the modification. However, the modification did not have a significant impact on the perception of the communication.
Jean-Luc Lugrin, Henrik Dudaczy, Marc Erich Latoschik,
Low-Frequency Stress Elicitation for VR Training
, In
Proceedings of the 10th IEEE International Conference on Virtual Worlds for Serious Applications (VS-Games)
, pp. 1-2
.
IEEE
, 2018.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2018lowfrequency,
title = {Low-Frequency Stress Elicitation for VR Training},
author = {Lugrin, Jean-Luc and Dudaczy, Henrik and Latoschik, Marc Erich},
booktitle = {Proceedings of the 10th IEEE International Conference on Virtual Worlds for Serious Applications (VS-Games)},
year = {2018},
pages = {1-2},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-vsgame-lugrin-low-freqency-preprint.pdf}
}
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik,
Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks
, In
Multimodal Technologies and Interaction
, Vol.
2
(
4)
, p. 81ff.
.
MDPI
, 2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{zimmerer:2018,
title = {Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks},
author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
journal = {Multimodal Technologies and Interaction},
year = {2018},
volume = {2},
number = {4},
pages = {81ff.},
publisher = {MDPI},
url = {https://www.mdpi.com/2414-4088/2/4/81},
doi = {10.3390/mti2040081}
}
Abstract: Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial to implement semantic fusion. They are compliant with rapid development cycles that are common for the development of user interfaces, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: Action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as the support of chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution for fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept's feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills this gap among previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: Its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation was and is used in various student projects, theses, as well as master-level courses. It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik,
Space Tentacles - Integrating Multimodal Input into a VR Adventure Game
, In
Proceedings of the 25th IEEE Virtual Reality (VR) conference
, pp. 745-746
.
IEEE
, 2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{zimmerer2018space,
title = {Space Tentacles - Integrating Multimodal Input into a VR Adventure Game},
author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 25th IEEE Virtual Reality (VR) conference},
year = {2018},
pages = {745-746},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ieeevr-space-tentacle-preprint.pdf}
}
Abstract: Multimodal interfaces for Virtual Reality (VR), e.g., based on speech and gesture input/output (I/O), often exhibit complex system architectures. Tight couplings between the required I/O processing stages, the underlying scene representation, and the simulator system's flow-of-control tend to result in high development and maintainability costs. This paper presents a maintainable solution for realizing such interfaces by means of a cherry-picking approach. A reusable multimodal I/O processing platform is combined with the simulation and rendering capabilities of the Unity game engine, which allows exploiting the game engine's superior API usability and tool support. The approach is illustrated based on the development of a multimodal VR adventure game called Space Tentacles.
Dominik Gall, Marc Erich Latoschik,
The Effect of Haptic Prediction Accuracy on Presence
, In
Proceedings of the 25th IEEE Virtual Reality (VR) conference
, pp. 73-80
.
2018.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{gall2018effect,
title = {The Effect of Haptic Prediction Accuracy on Presence},
author = {Gall, Dominik and Latoschik, Marc Erich},
booktitle = {Proceedings of the 25th IEEE Virtual Reality (VR) conference},
year = {2018},
pages = {73-80},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-vr-haptics-presence.pdf}
}
Abstract: This paper reports on the effect of visually-anchored prediction accuracy of haptic information on the perceived presence of virtual environments. We designed an experiment which explicitly prevented confounding factors potentially introduced by virtual body ownership and/or agency. The experimental design consisted of two main conditions defining congruent vs incongruent visual and haptic cues. Presence was measured during as well as after exposure. A distance estimation task solely based on motor action and the visually-anchored spatial model of the environment was executed to control for perceptual binding. 56 healthy volunteers were randomly assigned to one of two groups in a single-blind mixed-group design study. The study revealed increased presence for high prediction accuracy and decreased presence for low prediction accuracy, while perceptual binding still occurred. The observed effect sizes were in the medium range. The results indicate a significant correlation between prediction accuracy of haptic information and the perceived realness and presence of a virtual environment which gives rise to a discussion about models for dissociative symptom derealisation.
Thomas Waltemate, Dominik Gall, Daniel Roth, Mario Botsch, Marc Erich Latoschik,
The Impact of Avatar Personalization and Immersion on Virtual Body Ownership, Presence, and Emotional Response
, In
IEEE Transactions on Visualization and Computer Graphics (TVCG)
, Vol.
24
(
4)
, pp. 1643-1652
.
2018.
Best Journal Paper 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{waltemate2018impact,
title = {The Impact of Avatar Personalization and Immersion on Virtual Body Ownership, Presence, and Emotional Response},
author = {Waltemate, Thomas and Gall, Dominik and Roth, Daniel and Botsch, Mario and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
year = {2018},
volume = {24},
number = {4},
pages = {1643-1652},
note = {Best Journal Paper 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-avatar-personalization.pdf},
doi = {10.1109/TVCG.2018.2794629}
}
Abstract: This article reports the impact of the degree of personalization and individualization of users' avatars as well as the impact of the degree of immersion on typical psychophysical factors in embodied Virtual Environments. We investigated if and how virtual body ownership (including agency), presence, and emotional response are influenced depending on the specific look of users' avatars, which varied between (1) a generic hand-modeled version, (2) a generic scanned version, and (3) an individualized scanned version. The latter two were created using a state-of-the-art photogrammetry method providing a fast 3D-scan and post-process workflow. Users encountered their avatars in a virtual mirror metaphor using two VR setups that provided a varying degree of immersion, (a) a large screen surround projection (CAVE) and (b) a head-mounted display (HMD). We found several significant as well as a number of notable effects. First, personalized avatars significantly increase body ownership, presence, and dominance compared to their generic counterparts, even if the latter were generated by the same photogrammetry process and hence could be valued as equal in terms of the degree of realism and graphical quality. Second, the degree of immersion significantly increases the body ownership, agency, as well as the feeling of presence. These results substantiate the value of personalized avatars resembling users' real-world appearances as well as the value of the deployed scanning process to generate avatars for VR-setups where the effect strength might be substantial, e.g., in social Virtual Reality (VR) or in medical VR-based therapies relying on embodied interfaces. Additionally, our results also strengthen the value of fully immersive setups which, today, are accessible for a variety of applications due to the widely available consumer HMDs.
Daniel Rapp, Florian Niebling, Marc Erich Latoschik,
The impact of Pokemon Go and why it’s not about Augmented Reality – Results from a Qualitative Survey
, In
Proceedings of the 10th IEEE International Conference on Virtual Worlds for Serious Applications (VS-Games)
, pp. 1-2
.
IEEE
, 2018.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{rapp2018pokemon,
title = {The impact of Pokemon Go and why it’s not about Augmented Reality – Results from a Qualitative Survey},
author = {Rapp, Daniel and Niebling, Florian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 10th IEEE International Conference on Virtual Worlds for Serious Applications (VS-Games)},
year = {2018},
pages = {1-2},
publisher = {IEEE},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-vsgames-rapp-pokemon-preprint.pdf}
}
Jean-Luc Lugrin, Sebastian Oberdorfer, Marc Erich Latoschik, Alice Wittmann, Christian Seufert, Silke Grafe,
VR-Assisted vs Video-Assisted Teacher Training
, In
Proceedings of the 25th IEEE Virtual Reality (VR) conference
, pp. 625-626
.
2018.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2018vrassisted,
title = {VR-Assisted vs Video-Assisted Teacher Training},
author = {Lugrin, Jean-Luc and Oberdorfer, Sebastian and Latoschik, Marc Erich and Wittmann, Alice and Seufert, Christian and Grafe, Silke},
booktitle = {Proceedings of the 25th IEEE Virtual Reality (VR) conference},
year = {2018},
pages = {625-626},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ieeevr-lugrin-vr-teacher-training-poster-preprint.pdf}
}
Daniel Roth, Marc Erich Latoschik, Carola Bloch, Gary Bente,
When Some Things are Missing: The Quality of Interpersonal Communication in Social Virtual Reality (presentation)
, Vol.
Presentation at the 68th Annual Conference of the International Communication Association (ICA), May 24-28, 2018, Prague, Czech Republic
.
2018.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{roth2018things,
title = {When Some Things are Missing: The Quality of Interpersonal Communication in Social Virtual Reality (presentation)},
author = {Roth, Daniel and Latoschik, Marc Erich and Bloch, Carola and Bente, Gary},
year = {2018},
volume = {Presentation at the 68th Annual Conference of the International Communication Association (ICA), May 24-28, 2018, Prague, Czech Republic},
url = {}
}
2017
Robert Tscharn, Marc Erich Latoschik, Diana Löffler, Jörn Hurtienne,
"Stop over There" – Natural Gesture and Speech
Interaction for Non-Critical Spontaneous Intervention in Autonomous Driving
, In
Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI)
, pp. 91-100
.
2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{tscharn2017there,
title = {"Stop over There" – Natural Gesture and Speech
Interaction for Non-Critical Spontaneous Intervention in Autonomous Driving},
author = {Tscharn, Robert and Latoschik, Marc Erich and Löffler, Diana and Hurtienne, Jörn},
booktitle = {Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI)},
year = {2017},
pages = {91-100},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-icmi-stop-over-there-final-pre-print.pdf}
}
Abstract: We propose a novel multimodal intervention strategy for Non-Critical Spontaneous Situations (NCSSs) in autonomous driving. The strategy combines speech and deictic gestures to instruct the car about desired interventions which include spatial references to the current car's and hence driver's environment (e.g., “stop over there” or “take this parking lot”, accompanied by a pointing gesture). Speech allows for specifying a large number of maneuvers and functions in the car (e.g., stop, park, etc.), whereas deictic gestures provide a natural way of indicating spatial discourse referents used in these interventions (e.g., near this tree, that parking lot, etc.). Hence, the advantages of each modality are exploited. Our multimodal system also supports a semi-immersive Virtual Reality enhanced by Semantic Entities to realize and test the proposed NCSS intervention strategy. The evaluation confirmed our approach to be more natural and intuitive, and also less cognitively demanding compared to a combination of speech and touch, a combination that could be seen as a straightforward alternative to our approach due to already existing and available in-car touch screens.
Jan-Philipp Stauffert, Florian Niebling, Marc Erich Latoschik,
A Latency and Latency Jitter Simulation Framework with OSVR
, In
10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), IEEE Computer Society, (2017)
.
2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{stauffert2017latency,
title = {A Latency and Latency Jitter Simulation Framework with OSVR},
author = {Stauffert, Jan-Philipp and Niebling, Florian and Latoschik, Marc Erich},
booktitle = {10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), IEEE Computer Society, (2017)},
year = {2017},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-searis-latency-preprint.pdf}
}
Abstract: Latency is a pressing problem in Virtual Reality (VR) applications. Low latencies are required for VR to reduce perceptual artifacts and cyber sickness. Latency jitter, i.e., variance in the pattern of latency, prevents coping mechanisms as users can't adapt.
Low latency is a fundamental timeliness requirement to reduce the potential risks of cyber sickness and to increase the effectiveness, efficiency, and user experience of Virtual Reality systems. The effects of uniform latency degradation based on mean or worst-case values are well researched. In contrast, the effects of latency jitter, i.e., the distribution pattern of latency changes over time, have largely been ignored so far, although today's consumer VR systems are extremely vulnerable in this respect.
In this paper, we propose to create a model of latency and latency jitter with empirical distributions, as well as a method of using those models to inject latency. The process of creating a latency model is demonstrated with an example of gathering and converting latency samples from an example application. We show how to simulate latency and motivate its use in middleware to allow for less intrusive latency effect evaluations.
Daniel Roth, Jean-Luc Lugrin, Marc Erich Latoschik, Stephan Huber,
Alpha IVBO - Construction of a Scale to Measure the Illusion of Virtual Body Ownership
, In
Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems
, pp. 2875-2883
.
2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{roth2017alpha,
title = {Alpha IVBO - Construction of a Scale to Measure the Illusion of Virtual Body Ownership},
author = {Roth, Daniel and Lugrin, Jean-Luc and Latoschik, Marc Erich and Huber, Stephan},
booktitle = {Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
year = {2017},
pages = {2875-2883},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-chi-alpha-ivbo.pdf}
}
Abstract: In this paper, we present a scale construction and its initial test as a step towards a standardized measure of the illusion of virtual body ownership (IVBO) in virtual simulations. The IVBO describes the effect of users partly or fully perceiving a virtual body as their own. We analyzed components for a scale we call 'Alpha IVBO' by using data from a fake mirror scenario study. Users saw their movements mapped in real-time to a virtual avatar rendered on a 3D display placed in front of them. The principal component analysis of our sample data resulted in three factors: 'acceptance', 'control', and 'change'.
Peter Kullmann, Roman Eyck, Marc Erich Latoschik, Daniel Roth,
Augmenting Human Gaze in Avatar-Mediated Communication (Poster)
.
Poster presentations, Interdisciplinary College
, 2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{kullmann2017augmenting,
title = {Augmenting Human Gaze in Avatar-Mediated Communication (Poster)},
author = {Kullmann, Peter and Eyck, Roman and Latoschik, Marc Erich and Roth, Daniel},
year = {2017},
publisher = {Poster presentations, Interdisciplinary College},
url = {}
}
Abstract: Future social virtual environments will allow machines to transcend face-to-face interaction and manipulate human social interactions in computer-mediated communication. Whereas in real-world interactions we use our real bodies as means to mediate our message and communicate additional information, in virtual environments we are represented by avatars. Progressing from Mori's Uncanny Valley theory [1] and similar to "the medium is the message" [2] approaches, many research activities have investigated the impact of manipulating the appearance realism of virtual characters [3]. However, the impact of behavioural realism and its potential augmentation in avatar-mediated communication is not fully understood.
The present work in progress investigates the impact of behaviour most likely disclosing human comprehension: gaze. Gaze cues are reciprocal nonverbal signals that are used both to detect information about interlocutors and to communicate to them [4]. Where our conversational partners focus their attention is relevant for building rapport and interaction naturalness [5].
In an avatar-mediated communication system prototype, we examine how augmenting avatars' gaze behaviour influences the gaze behaviour of humans communicating via avatars and their rating of the communication quality. We let our avatars make eye contact whenever the human they are facing speaks. We hypothesise that this will increase the quality of the social interaction and will lead to participants acting more attentively with regard to their gaze behaviour. Furthermore, we aim to identify a quantitative relation between the degree of the aforementioned manipulation and the acceptance of the communicative counterpart as human or machine. While participants adjusting their gaze behaviour in relation to the avatar augmentation implies great opportunities for using avatar-mediated communication in therapeutic applications, its ethical implications have to be addressed.
Dennis Wiebusch, Chris Zimmerer, Marc Erich Latoschik,
Cherry-Picking RIS Functionality -- Integration of Game and VR Engine Sub-Systems based on Entities and Events
, In
10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
, pp. 1-8
.
IEEE Computer Society
, 2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{wiebusch2017,
title = {Cherry-Picking RIS Functionality -- Integration of Game and VR Engine Sub-Systems based on Entities and Events},
author = {Wiebusch, Dennis and Zimmerer, Chris and Latoschik, Marc Erich},
booktitle = {10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
year = {2017},
pages = {1-8},
publisher = {IEEE Computer Society},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-ieeevr-searis-cherry-picking-preprint.pdf}
}
Abstract: Modern game engines provide a variety of high-end features and sub-systems which have made them increasingly interesting for AR/VR research. Here, it is often necessary to combine features from different sources. This paper presents an approach based on entity-event state decoupling and exchange. The approach targets the combination of sub-systems from different sources which simulate functionally coherent aspects of the virtual objects like physics, graphics, AI, or developer services like state editing. The approach decouples specific internal representations using a semantic description layer for identifiers, data types, and potential relations between them. We illustrate the main concepts using examples from the combination of the Unreal Engine 4, the Unity engine, and our own research software, and illustrate performance-related aspects as a guideline for the choice of an appropriate transport layer.
Daniel Roth, Jean-Luc Lugrin, Sebastian von Mammen, Marc Erich Latoschik,
Controllers & Inputs: Masters of Puppets
Jaime Banks (Ed.),
, Vol.
Avatar, Assembled -- The Social and Technical Anatomy of Digital Bodies, 106
, pp. 281-290
.
Peter Lang
, 2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inbook{Roth:2017aa,
title = {Controllers \& Inputs: Masters of Puppets},
author = {Roth, Daniel and Lugrin, Jean-Luc and von Mammen, Sebastian and Latoschik, Marc Erich},
editor = {Banks, Jaime},
year = {2017},
volume = {Avatar, Assembled -- The Social and Technical Anatomy of Digital Bodies, 106},
pages = {281-290},
publisher = {Peter Lang},
url = {}
}
Abstract: Avatar, Assembled is a curated volume that unpacks videogame and virtual world avatars – not as a monolithic phenomenon (as they are usually framed) but as sociotechnical assemblages, pieced together from social (human-like) features like voice and gesture to technical (machine-like) features like graphics and glitches. Each chapter accounts for the empirical, theoretical, technical, and popular understandings of these avatar "components" – 60 in total – altogether offering a nuanced explication of avatars-as-assemblages as they matter in contemporary society and in individual experience. The volume is a "crossover" piece in that, while it delves into complex ideas, it is written in a way that will be accessible and interesting to students, researchers, designers, and practitioners alike.
Jascha Achenbach, Thomas Waltemate, Marc Erich Latoschik, Mario Botsch,
Fast Generation of Realistic Virtual Humans
, In
23rd ACM Symposium on Virtual Reality Software and Technology (VRST)
, pp. 12:1-12:10
.
2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{achenbach2017generation,
title = {Fast Generation of Realistic Virtual Humans},
author = {Achenbach, Jascha and Waltemate, Thomas and Latoschik, Marc Erich and Botsch, Mario},
booktitle = {23rd ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2017},
pages = {12:1-12:10},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-vrst-fast-generation-realistic-virtual-humans.pdf}
}
Abstract: In this paper we present a complete pipeline to create ready-to-animate virtual humans by fitting a template character to a point set obtained by scanning a real person using multi-view stereo reconstruction. Our virtual humans are built upon a holistic character model and feature a detailed skeleton, fingers, eyes, teeth, and a rich set of facial blendshapes. Furthermore, due to the careful selection of techniques and technology, our reconstructed humans are quite realistic in terms of both geometry and texture. Since we represent our models as single-layer triangle meshes and animate them through standard skeleton-based skinning and facial blendshapes, our characters can be used in standard VR engines out of the box. By optimizing for computation time and minimizing manual intervention, our reconstruction pipeline is capable of processing whole characters in less than ten minutes.
Daniel Roth, Marc Erich Latoschik, Gary Bente,
Hello from the Other Side Behavioral Realism and Social Augmentation in Shared Virtual Environments
, In
Presentation at the 3rd Virtual Social Interaction Workshop, Bielefeld University, Bielefeld
.
2017.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{roth2017hello,
title = {Hello from the Other Side Behavioral Realism and Social Augmentation in Shared Virtual Environments},
author = {Roth, Daniel and Latoschik, Marc Erich and Bente, Gary},
booktitle = {Presentation at the 3rd Virtual Social Interaction Workshop, Bielefeld University, Bielefeld},
year = {2017},
url = {https://sites.google.com/view/vsi2017}
}
Marc Erich Latoschik, Dirk Reiners, Pablo Figueroa, Wesley Griffin (Eds.),
IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
2017.
To appear
[BibTeX]
[Download]
[BibSonomy]
@proceedings{latoschik2017workshop,
title = {IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Figueroa, Pablo and Griffin, Wesley},
year = {2017},
note = {To appear},
url = {}
}
Sebastian Oberdörfer, David Heidrich, Marc Erich Latoschik,
Interactive Gamified Virtual Reality Training of Affine Transformation
, In
Proceedings of DeLFI and GMW Workshops 2017
Carsten Ullrich, Martin Wessner (Eds.),
.
2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{oberdorfer2017interactive,
title = {Interactive Gamified Virtual Reality Training of Affine Transformation},
author = {Oberdörfer, Sebastian and Heidrich, David and Latoschik, Marc Erich},
editor = {Ullrich, Carsten and Wessner, Martin},
booktitle = {Proceedings of DeLFI and GMW Workshops 2017},
year = {2017},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-vrar-learning-getit-vr.pdf}
}
Abstract: Affine transformations, which are used in many engineering areas, often escape an intuitive approach due to their high level of complexity and abstractness. Learners not only need to understand the basic rules of matrix algebra but are also challenged to understand how the theoretically grounded aspects result in object transformations. Therefore, we developed the Gamified Training Environment for Affine Transformation (GEtiT), which directly encodes this abstract learning content in its game mechanics. By intuitively presenting and demanding the application of affine transformations in a virtual gamified training environment, learners train the application of the knowledge due to repetition while receiving immediate and highly immersive visual feedback about the outcomes of their inputs. Also, by providing flow-inducing gameplay, users are highly motivated to practice their knowledge, thus experiencing a higher learning quality. As immersion, presence, and spatial knowledge presentation can have a positive effect on the training outcome, GEtiT explores the effectiveness of different visual immersion levels by providing a desktop and a VR version. This article presents our approach of directly encoding the abstract learning content in game mechanics, describes the conceptual design as well as the technical implementation, and discusses the design differences between the two GEtiT versions.
Daniel Roth, Constantin Kleinbeck, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik,
POSTER Social Augmentations in Multi-User Virtual Reality: A Virtual Museum Experience
, In
IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)
, pp. 42-43
.
2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{roth2017poster,
title = {POSTER Social Augmentations in Multi-User Virtual Reality: A Virtual Museum Experience},
author = {Roth, Daniel and Kleinbeck, Constantin and Feigl, Tobias and Mutschler, Christopher and Latoschik, Marc Erich},
booktitle = {IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)},
year = {2017},
pages = {42-43},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-roth-ismar-behav-augm.pdf},
doi = {10.1109/ISMAR-Adjunct.2017.28}
}
Abstract: This work in progress report demonstrates a novel approach for behavioral augmentations in Virtual Reality (VR). Using a large scale tracking system, groups of five users explored a virtual museum. We investigated how augmenting social interactions impacts this experience, by designing behavioral transformations for behavioral phenomena in social interactions. Preliminary data indicate a reduction of perceived isolation, and a more thought-provoking experience with active behavioral augmentation.
Martin Fischbach, Dennis Wiebusch, Marc Erich Latoschik,
Semantic Entity-Component State Management Techniques to Enhance Software Quality for Multimodal VR-Systems
, In
IEEE Transactions on Visualization and Computer Graphics (TVCG)
, Vol.
23
(
4)
, pp. 1342-1351
.
IEEE
, 2017.
DOI: 10.1109/TVCG.2017.2657098
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{fischbach2017semantic,
title = {Semantic Entity-Component State Management Techniques to Enhance Software Quality for Multimodal VR-Systems},
author = {Fischbach, Martin and Wiebusch, Dennis and Latoschik, Marc Erich},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
year = {2017},
volume = {23},
number = {4},
pages = {1342-1351},
publisher = {IEEE},
note = {DOI: 10.1109/TVCG.2017.2657098},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-ieee-tvcg-fischbach-state-management-manuscript.pdf}
}
Abstract: Modularity, modifiability, reusability, and API usability are important qualities that determine the maintainability of complex software architectures typical for Virtual, Augmented, and Mixed Reality (VR, AR, MR) applications. These architectures address various input-, output-, and processing aspects, which are usually implemented by dedicated software modules. Collectively, these modules have to maintain the real-time simulation of a coherent application state. This requirement, however, implicates multiple semantic as well as temporal state representation- and access interdependencies between modules, exacerbating maintainable solutions.
This paper presents five semantics-based software techniques for state management that extend the well-established entity-component system (ECS) pattern, foster modularity and enhance overall maintainability. A walk-through of typical implementation aspects of multimodal (speech and gesture) interfaces is used to highlight the techniques' benefits, providing a typical example for demanding software architectures in VR, AR and MR. Finally, central implementation details are compared against prominent alternatives.
Daniel Roth, Kristoffer Waldow, Marc Erich Latoschik, Arnulph Fuhrmann, Gary Bente,
Socially Immersive Avatar-Based Communication
, In
Proceedings of the 24th IEEE Virtual Reality (VR) Conference, Los Angeles, CA, 2017
, pp. 259-260
.
2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{roth2017socially,
title = {Socially Immersive Avatar-Based Communication},
author = {Roth, Daniel and Waldow, Kristoffer and Latoschik, Marc Erich and Fuhrmann, Arnulph and Bente, Gary},
booktitle = {Proceedings of the 24th IEEE Virtual Reality (VR) Conference, Los Angeles, CA, 2017},
year = {2017},
pages = {259-260},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-IEEEVR-socim-preprint.pdf},
doi = {10.1109/VR.2017.7892275}
}
Abstract: In this paper, we present SIAM-C, an avatar-mediated communication platform to study socially immersive interaction in virtual environments. The proposed system is capable of tracking, transmitting, representing body motion, facial expressions, and voice via virtual avatars and inherits the transmission of human behaviors that are available in real-life social interactions. Users are immersed using active stereoscopic rendering projected onto a life-size projection plane, utilizing the concept of “fish tank” virtual reality (VR). Our prototype connects two separate rooms and allows for socially immersive avatar-mediated communication in VR.
Marc Erich Latoschik, Daniel Roth, Dominik Gall, Jascha Achenbach, Thomas Waltemate, Mario Botsch,
The Effect of Avatar Realism in Immersive Social Virtual Realities
, In
23rd ACM Symposium on Virtual Reality Software and Technology (VRST)
, pp. 39:1-39:10
.
2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik2017effect,
title = {The Effect of Avatar Realism in Immersive Social Virtual Realities},
author = {Latoschik, Marc Erich and Roth, Daniel and Gall, Dominik and Achenbach, Jascha and Waltemate, Thomas and Botsch, Mario},
booktitle = {23rd ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2017},
pages = {39:1-39:10},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-vrst-effect-of-avatar-realism.pdf}
}
Abstract: This paper investigates the effect of avatar realism on embodiment and social interactions in Virtual Reality (VR). We compared abstract avatar representations based on a wooden mannequin with high fidelity avatars generated from photogrammetry 3D scan methods. Both avatar representations were alternately applied to participating users and to the virtual counterpart in dyadic social encounters to examine the impact of avatar realism on self-embodiment and social interaction quality. Users were immersed in a virtual room via a head mounted display (HMD). Their full-body movements were tracked and mapped to respective movements of their avatars. Embodiment was induced by presenting the users’ avatars to themselves in a virtual mirror. Afterwards they had to react to a non-verbal behavior of a virtual interaction partner they encountered in the virtual space. Several measures were taken to analyze the effect of the appearance of the users’ avatars as well as the effect of the appearance of the others’ avatars on the users. The realistic avatars were rated significantly more human-like when used as avatars for the others and evoked a stronger acceptance in terms of virtual body ownership (VBO). There also was some indication of a potential uncanny valley. Additionally, there was an indication that the appearance of the others’ avatars impacts the self-perception of the users.
Jonas Bruschke, Florian Niebling, Ferdinand Maiwald, Kristina Friedrichs, Markus Wacker, Marc Erich Latoschik,
Towards Browsing Repositories of Spatially Oriented Historic Photographic Images in 3D Web Environments
, In
Proceedings of Web3D ’17, Brisbane, QLD, Australia
.
2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{bruschke2017towards,
title = {Towards Browsing Repositories of Spatially Oriented Historic Photographic Images in 3D Web Environments},
author = {Bruschke, Jonas and Niebling, Florian and Maiwald, Ferdinand and Friedrichs, Kristina and Wacker, Markus and Latoschik, Marc Erich},
booktitle = {Proceedings of Web3D ’17, Brisbane, QLD, Australia},
year = {2017},
url = {}
}
Abstract: Archives and museums store vast collections of historical images of urban areas and make them publicly available through online platforms. Many of these images, often containing historic buildings and landscapes, can be oriented spatially using automatic methods such as structure from motion (SfM). Providing spatially and temporally oriented images of urban architecture, in combination with advanced searching and 2D/3D exploration techniques, offers new potentials in supporting historians in their research.
We are developing a 3D web environment usable to historians to spatially search online media repositories containing historic photographic images. We combine 3D models of historic buildings with spatially oriented images, replacing text-based searching through meta-data with spatial and temporal browsing with respect to given focus points in historic city models.
Michael Rojkov, Doris Aschenbrenner, Klaus Schilling, Marc Erich Latoschik,
Tracking Algorithms for Cooperative Telemaintenance Repair Operations
, In
IFAC-PapersOnLine
, Vol.
50
(
1)
, pp. 331-336
.
2017.
20th IFAC World Congress
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{rojkov2017tracking,
title = {Tracking Algorithms for Cooperative Telemaintenance Repair Operations},
author = {Rojkov, Michael and Aschenbrenner, Doris and Schilling, Klaus and Latoschik, Marc Erich},
journal = {IFAC-PapersOnLine},
year = {2017},
volume = {50},
number = {1},
pages = {331-336},
note = {20th IFAC World Congress},
url = {http://www.sciencedirect.com/science/article/pii/S2405896317300757}
}
Abstract: This paper broaches the issue of using 2D tracking algorithm frameworks on mobile devices in order to enable cooperative repair tasks in robot-operated industrial production. A local service technician can be supervised remotely by an external expert in the context of telemaintenance. At first we introduce several tracking libraries and test them on low-performance mobile devices. After that we conduct a test run on a large amount of pictures derived from video streams with different characteristics of the application area of industrial maintenance and repair operations in order to find the best suited algorithm. At the end we describe the implementation on several mobile devices and delay considerations.
Jernej Barbic, Mirabelle D'Cruz, Marc Erich Latoschik, Mel Slater, Patrick Bourdot (Eds.),
Virtual Reality and Augmented Reality, Proceedings of the 14th EuroVR International Conference, EuroVR 2017
, Vol.
10700
.
Springer
, 2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@proceedings{barbic2017virtual,
title = {Virtual Reality and Augmented Reality, Proceedings of the 14th EuroVR International Conference, EuroVR 2017},
editor = {Barbic, Jernej and D'Cruz, Mirabelle and Latoschik, Marc Erich and Slater, Mel and Bourdot, Patrick},
year = {2017},
volume = {10700},
publisher = {Springer},
url = {http://www.springer.com/de/book/9783319723228?wt_mc=Internal.Event.1.SEM.BookAuthorCongrat}
}
Abstract: This book constitutes the refereed proceedings of the 14th International Conference on Virtual Reality and Augmented Reality, EuroVR 2017, held in Laval, France, in December 2017.
The 10 full papers and 2 short papers presented were carefully reviewed and selected from 36 submissions. The papers are organized in four topical sections: interaction models and user studies, visual and haptic real-time rendering, perception and cognition, and rehabilitation and safety.
Marc Erich Latoschik, Franz Weilbach, Negin Hamzeheinejad, Dominik Gall,
VRGait: An Immersive Virtual Reality System for Gait-Specific Neurorehabilitation and Therapy
, In
European Congress of NeuroRehabilitation 2017: ECNR2017
.
Lausanne
2017.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik2017vrgait,
title = {VRGait: An Immersive Virtual Reality System for Gait-Specific Neurorehabilitation and Therapy},
author = {Latoschik, Marc Erich and Weilbach, Franz and Hamzeheinejad, Negin and Gall, Dominik},
booktitle = {European Congress of NeuroRehabilitation 2017: ECNR2017},
year = {2017},
address = {Lausanne},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2017-ecnr-vr-gait.pdf}
}
Abstract: Fortunately, physiotherapy and repetitive movement exercises allow patients to regain motor functions lost through neurological injuries, e.g., those caused by strokes. Exercising is exhausting, and patients’ motivation plays a central role in the effectiveness of such therapies. Unfortunately, dedicated therapy equipment often requires restricting users or being set up in less attractive environments due to, e.g., building statics. In addition, such systems currently do not exploit the potential of adaptive training stimuli based on movement mimicry. We introduce first results of a VR-based gait rehabilitation system: VRGait immerses patients into alternative virtual environments and maps their movements on the therapy device to movements in the virtual worlds (see Figure 1). VRGait’s goals are to strengthen therapy effectiveness (1) by increasing motivation through inspiring walking escapes (think beach or mountain scenes) and gamified tasks and (2) by exploiting motor mimicry caused by their controlled virtual avatars or counterparts walking together with them.
2016
Martin Fischbach, Hendrik Striepe, Marc Erich Latoschik, Birgit Lugrin,
A Low-cost, Variable, Interactive Surface for Mixed-Reality Tabletop Games
, In
22nd ACM Symposium on Virtual Reality Software and Technology (VRST 2016)
, pp. 297-298
.
ACM
, 2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fischbach2016lowcost,
title = {A Low-cost, Variable, Interactive Surface for Mixed-Reality Tabletop Games},
author = {Fischbach, Martin and Striepe, Hendrik and Latoschik, Marc Erich and Lugrin, Birgit},
booktitle = {22nd ACM Symposium on Virtual Reality Software and Technology (VRST 2016)},
year = {2016},
pages = {297-298},
publisher = {ACM},
url = {http://dl.acm.org/authorize?N21397}
}
Abstract: This paper introduces an interactive surface concept for Mixed Reality (MR) tabletop games that combines a variable (LCD and/or projection) screen configuration with the detection of finger touches, in-air gestures, and tangibles. It is low-cost and minimally requires an ordinary table, a TV screen, and a Kinect v2 sensor. Existing applications can easily be connected by being compliant to standards. The concept is intended to foster further research on collaborative tabletop situations, not limited to games, but also including learning, meetings, and social interaction.
Daniel Roth, Jean-Luc Lugrin, Julia Büser, Gary Bente, Arnulph Fuhrmann, Marc Erich Latoschik,
A Simplified Inverse Kinematic Approach for Embodied VR Applications
, In
Proceedings of the 23rd IEEE Virtual Reality (VR) conference
, pp. 275-276
.
2016.
Best Poster Award 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{roth2016simplified,
title = {A Simplified Inverse Kinematic Approach for Embodied VR Applications},
author = {Roth, Daniel and Lugrin, Jean-Luc and Büser, Julia and Bente, Gary and Fuhrmann, Arnulph and Latoschik, Marc Erich},
booktitle = {Proceedings of the 23rd IEEE Virtual Reality (VR) conference},
year = {2016},
pages = {275-276},
note = {Best Poster Award 🏆},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-ieee-vr-simplified-ik.pdf},
doi = {10.1109/VR.2016.7504760}
}
Abstract: In this paper, we compare a full body marker set with a reduced rigid body marker set supported by inverse kinematics. We measured system latency, illusion of virtual body ownership, and task load in an applied scenario for inducing acrophobia. While not showing a significant change in body ownership or task performance, results do show that latency and task load are reduced when using the rigid body inverse kinematics solution. The approach therefore has the potential to improve virtual reality experiences.
Sascha Link, Berit Barkschat, Chris Zimmerer, Martin Fischbach, Dennis Wiebusch, Jean-Luc Lugrin, Marc Erich Latoschik,
An Intelligent Multimodal Mixed Reality Real-Time Strategy Game
, In
Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference
, pp. 223-224
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{2016:linkaa,
title = {An Intelligent Multimodal Mixed Reality Real-Time Strategy Game},
author = {Link, Sascha and Barkschat, Berit and Zimmerer, Chris and Fischbach, Martin and Wiebusch, Dennis and Lugrin, Jean-Luc and Latoschik, Marc Erich},
booktitle = {Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference},
year = {2016},
pages = {223-224},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-ieee-vr-poster-xroads-manuscript-reduced-file-size.pdf}
}
Abstract: This paper presents a mixed reality tabletop role-playing game with a novel combination of interaction styles and gameplay mechanics. Our contribution extends previous approaches by abandoning the traditional turn-based gameplay in favor of simultaneous real-time interaction. The increased cognitive and physical load during the simultaneous control of multiple game characters is counteracted by two features: First, certain game characters are equipped with AI-driven capabilities to become semi-autonomous virtual agents. Second, (groups of) these agents can be instructed by high-level commands via a multimodal—speech and gesture—interface.
Jean-Luc Lugrin, David Obremski, Daniel Roth, Marc Erich Latoschik,
Audio Feedback and Illusion of Virtual Body Ownership in Mixed Reality
, In
Proceedings of the 22Nd ACM Conference on Virtual Reality Software and Technology
, pp. 309-310
.
New York, NY, USA
:
ACM
, 2016.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{Lugrin:2016:AFI:2993369.2996319,
title = {Audio Feedback and Illusion of Virtual Body Ownership in Mixed Reality},
author = {Lugrin, Jean-Luc and Obremski, David and Roth, Daniel and Latoschik, Marc Erich},
booktitle = {Proceedings of the 22Nd ACM Conference on Virtual Reality Software and Technology},
year = {2016},
pages = {309-310},
publisher = {ACM},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-vrst-audiofeedback-and-ivbo.pdf},
doi = {10.1145/2993369.2996319}
}
Jean-Luc Lugrin, Ivan Polyschev, Daniel Roth, Marc Erich Latoschik,
Avatar Anthropomorphism and Acrophobia
, In
Proceedings of the 22Nd ACM Conference on Virtual Reality Software and Technology
, pp. 315-316
.
New York, NY, USA
:
ACM
, 2016.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{Lugrin:2016:AAA:2993369.2996313,
title = {Avatar Anthropomorphism and Acrophobia},
author = {Lugrin, Jean-Luc and Polyschev, Ivan and Roth, Daniel and Latoschik, Marc Erich},
booktitle = {Proceedings of the 22Nd ACM Conference on Virtual Reality Software and Technology},
year = {2016},
pages = {315-316},
publisher = {ACM},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-vrst-Avatar-antro-and-fear-of-heights.pdf},
doi = {10.1145/2993369.2996313}
}
Daniel Roth, Jean-Luc Lugrin, Dmitri Galakhov, Arvid Hofmann, Gary Bente, Marc Erich Latoschik, Arnulph Fuhrmann,
Avatar Realism and Social Interaction Quality in Virtual Reality
, In
Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference
, pp. 277-278
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{roth2016avatar,
title = {Avatar Realism and Social Interaction Quality in Virtual Reality},
author = {Roth, Daniel and Lugrin, Jean-Luc and Galakhov, Dmitri and Hofmann, Arvid and Bente, Gary and Latoschik, Marc Erich and Fuhrmann, Arnulph},
booktitle = {Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference},
year = {2016},
pages = {277-278},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-ieee-vr-poster-interaction-qualilty.pdf},
doi = {10.1109/VR.2016.7504761}
}
Abstract: In this paper, we describe an experimental method to investigate the effects of reduced social information and behavioral channels in immersive virtual environments with full-body avatar embodiment. We compared physical-based and verbal-based social interactions in real world (RW) and virtual reality (VR). Participants were represented by abstract avatars that did not display gaze, facial expressions or social cues from appearance. Our results show significant differences in terms of presence and physical performance. However, differences in effectiveness in the verbal task were not present. Participants appear to efficiently compensate for missing social and behavioral cues by shifting their attention to other behavioral channels.
Marc Erich Latoschik, Jean-Luc Lugrin, Michael Habel, Daniel Roth, Christian Seufert, Silke Grafe,
Breaking Bad Behavior: Immersive Training of Class Room Management
, In
Proceedings of the 22Nd ACM Conference on Virtual Reality Software and Technology
, pp. 317-318
.
New York, NY, USA
:
ACM
, 2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{Latoschik:2016:BBB:2993369.2996308,
title = {Breaking Bad Behavior: Immersive Training of Class Room Management},
author = {Latoschik, Marc Erich and Lugrin, Jean-Luc and Habel, Michael and Roth, Daniel and Seufert, Christian and Grafe, Silke},
booktitle = {Proceedings of the 22Nd ACM Conference on Virtual Reality Software and Technology},
year = {2016},
pages = {317-318},
publisher = {ACM},
address = {New York, NY, USA},
url = {http://dl.acm.org/authorize?N40662},
doi = {10.1145/2993369.2996308}
}
Abstract: This article presents an immersive virtual reality (VR) system for training classroom management skills, with a specific focus on learning to manage disruptive student behavior in face-to-face, one-to-many teaching scenarios. The core of the system is a real-time 3D virtual simulation of a classroom populated by twenty-four semi-autonomous virtual students. The system has been designed as a companion tool for classroom management seminars in a syllabus for primary and secondary school teachers. This will allow lecturers to link theory with practice using the medium of VR. The system is therefore designed for two users: a trainee teacher and an instructor supervising the training session. The teacher is immersed in a real-time 3D simulation of a classroom by means of a head-mounted display and headphone. The instructor operates a graphical desktop console, which renders a view of the class and the teacher whose avatar movements are captured by a marker less tracking system. This console includes a 2D graphics menu with convenient behavior and feedback control mechanisms to provide human-guided training sessions. The system is built using low-cost consumer hardware and software. Its architecture and technical design are described in detail. A first evaluation confirms its conformance to critical usability requirements (i.e., safety and comfort, believability, simplicity, acceptability, extensibility, affordability, and mobility). Our initial results are promising and constitute the necessary first step toward a possible investigation of the efficiency and effectiveness of such a system in terms of learning outcomes and experience.
Jean-Luc Lugrin, Marc Erich Latoschik, Michael Habel, Daniel Roth, Christian Seufert, Silke Grafe,
Breaking Bad Behaviors: A New Tool for Learning Classroom Management using Virtual Reality
, In
Frontiers in ICT
, Vol.
3
, p. 26
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10.3389/fict.2016.00026,
title = {Breaking Bad Behaviors: A New Tool for Learning Classroom Management using Virtual Reality},
author = {Lugrin, Jean-Luc and Latoschik, Marc Erich and Habel, Michael and Roth, Daniel and Seufert, Christian and Grafe, Silke},
journal = {Frontiers in ICT},
year = {2016},
volume = {3},
pages = {26},
url = {http://journal.frontiersin.org/article/10.3389/fict.2016.00026},
doi = {10.3389/fict.2016.00026}
}
Abstract: This article presents an immersive Virtual Reality (VR) system for training classroom management skills, with a specific focus on learning to manage disruptive student behaviour in face-to-face, one-to-many teaching scenarios. The core of the system is a real-time 3D virtual simulation of a classroom, populated by twenty-four semi-autonomous virtual students. The system has been designed as a companion tool for classroom management seminars in a syllabus for primary and secondary school teachers. This will allow lecturers to link theory with practice, using the medium of VR. The system is therefore designed for two users: a trainee teacher and an instructor supervising the training session. The teacher is immersed in a real-time 3D simulation of a classroom by means of a head-mounted display and headphone. The instructor operates a graphical desktop console which renders a view of the class and the teacher, whose avatar movements are captured by a marker-less tracking system. This console includes a 2D graphics menu with convenient behaviour and feedback control mechanisms to provide human-guided training sessions. The system is built using low-cost consumer hardware and software. Its architecture and technical design are described in detail. A first evaluation confirms its conformance to critical usability requirements (i.e., safety and comfort, believability, simplicity, acceptability, extensibility, affordability and mobility). Our initial results are promising, and constitute the necessary first step toward a possible investigation of the efficiency and effectiveness of such a system in terms of learning outcomes and experience.
Daniel Roth, Marc Erich Latoschik, Arnulph Fuhrmann, Gary Bente,
Effects of Behavioral Realism and Physical Appearance on Social Interaction Quality in Shared Virtual Environments
, In
Presentation at the 2nd Virtual Social Interaction Workshop, Media City, UK
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{roth2016effects,
title = {Effects of Behavioral Realism and Physical Appearance on Social Interaction Quality in Shared Virtual Environments},
author = {Roth, Daniel and Latoschik, Marc Erich and Fuhrmann, Arnulph and Bente, Gary},
booktitle = {Presentation at the 2nd Virtual Social Interaction Workshop, Media City, UK},
year = {2016},
url = {}
}
Abstract: Avatar-mediated social interactions in shared virtual environments (SVEs) considerably differ from real-life interactions, as they often lack in reproducing the full range of social signals and behaviors (in real-time), such as facial expression and gaze. We investigated the impact of these deficiencies on the quality of social interaction. In a first study (N=36), we compared a motor driven collaborative task and a verbal driven negotiation task in real world and SVE. Users were immersed using head-mounted displays and represented as mannequin-like avatars controlled with body tracking. The results suggest significant differences in networked minds/presence factors as well as motor performance. However, functional aspects of the negotiation task did not show significant differences. In a second study (N=64), we replicated the negotiation task and paid special attention to judgments of
Stephan Rehfeld, Marc Erich Latoschik, Henrik Tramberend,
Estimating latency and concurrency of Asynchronous Real-Time Interactive Systems using Model Checking
, In
Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference
, pp. 57-66
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{rehfeld2016estimating,
title = {Estimating latency and concurrency of Asynchronous Real-Time Interactive Systems using Model Checking},
author = {Rehfeld, Stephan and Latoschik, Marc Erich and Tramberend, Henrik},
booktitle = {Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference},
year = {2016},
pages = {57-66},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-ieee-vr-model-checking.pdf}
}
Abstract: This article introduces model checking as an alternative method to estimate the latency and parallelism of asynchronous Realtime Interactive Systems (RISs). Five typical concurrency and synchronization schemes often found in concurrent Virtual Reality (VR) and computer game systems are identified as use-cases. These use-cases guide the development a) of software primitives necessary for the use-case implementation based on asynchronous RIS architectures and b) of a graphical editor for the specification of various concurrency and synchronization schemes (including the use-cases) based on these primitives. Several model-checking tools are evaluated against typical requirements in the RIS area. As a result, the formal model checking language Rebeca and its model checker RMC are applied to the specification of the use-cases to estimate latency and parallelism for each case. The estimations are compared to measured results achieved by classical profiling from a real-world application. The estimated results of the latencies by model checking approximated the measured results adequately with a minimal difference of 3.9% in the best case and -26.8% in the worst case. It also detected a problematic execution path not covered by the stochastic nature of the measured profiling samples. The estimated results of the degree of parallelization by model checking are approximated with a minimal difference of 9.3% and a maximal difference of -28.8%. Finally, the effort of model checking is compared to the effort of implementing and profiling a RIS.
Jean-Luc Lugrin, David Zilch, Daniel Roth, Gary Bente, Marc Erich Latoschik,
FaceBo: Real-Time Face and Body Tracking for Faithful Avatar Synthesis
, In
Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference
IEEE (Ed.),
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{lugrin2016facebo,
title = {FaceBo: Real-Time Face and Body Tracking for Faithful Avatar Synthesis},
author = {Lugrin, Jean-Luc and Zilch, David and Roth, Daniel and Bente, Gary and Latoschik, Marc Erich},
editor = {IEEE, },
booktitle = {Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference},
year = {2016},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-ieee-vr-poster-facebo.pdf}
}
Abstract: This paper introduces a low-cost framework capable of combining both real-time markerless face and body tracking for faithful avatar embodiment in Virtual Reality (VR). We discuss suitable hardware and software solutions and present a first prototype. This work lays the technological basis for further research on the importance of the appearance and behavioral realism of avatars, e.g., for the illusion of virtual body ownership, for social interactions in VR, as well as for VR entertainment applications (immersive games or movies).
Marc Erich Latoschik, Jean-Luc Lugrin, Daniel Roth,
FakeMi: A Fake Mirror System for Avatar Embodiment Studies
, In
Proceeding of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)
, pp. 73-76
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik2016fakemi,
title = {FakeMi: A Fake Mirror System for Avatar Embodiment Studies},
author = {Latoschik, Marc Erich and Lugrin, Jean-Luc and Roth, Daniel},
booktitle = {Proceeding of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2016},
pages = {73-76},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-vrst-fake-mirror-system-for-avatar-embodiment-studies-preprint.pdf}
}
Abstract: This paper introduces a fake mirror system as a research tool to study the effect of avatar embodiment with a non-immersive virtual environment. The system combines marker-less face and body tracking to animate the individual avatars seen in a stereoscopic display with a correct perspective projection. The display dimensions match typical dimensions of a real physical mirror and the animated avatars are rendered based on a geometrically correct reflection as expected from a real mirror including correct body and face animations. The first evaluation of the system reveals the high acceptance of the setup as well as a convincing illusion of a real mirror with different types of avatars.
Marc Erich Latoschik, Dirk Reiners, Roland Blach, Pablo Figueroa (Eds.),
IEEE 9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
IEEE digital library
, 2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@proceedings{latoschik2016software,
title = {IEEE 9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Blach, Roland and Figueroa, Pablo},
year = {2016},
publisher = {IEEE digital library},
url = {http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7551364}
}
Abstract: SEARIS provides a forum for researchers and practitioners working on the design, development, and support of realtime interactive systems (RIS). These systems span from Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) environments to novel Human-Computer Interaction systems (such as multimodal or multitouch architectures) and entertainment applications in general. Their common principle is a strong user-centric orientation, which requires real-time processing of simulation aspects as well as input/output events according to perceptual constraints. Therefore, we encourage researchers and developers of real-time human-computer interaction systems of all flavors to share their experiences and learn from each other during this workshop.
Doris Aschenbrenner, Marc Erich Latoschik, Klaus Schilling,
Industrial Maintenance with Augmented Reality: Two Case Studies
, In
Proceedings of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)
.
ACM
, 2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{aschenbrenner2016industrial,
title = {Industrial Maintenance with Augmented Reality: Two Case Studies},
author = {Aschenbrenner, Doris and Latoschik, Marc Erich and Schilling, Klaus},
booktitle = {Proceedings of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2016},
publisher = {ACM},
url = {http://dl.acm.org/authorize?N40675}
}
Abstract: Remote maintenance of industrial manipulators is often performed via telephone support. Recent approaches in the context of 'Industry 4.0' consider internet technologies and Augmented Reality (AR) to enhance situation awareness between external experts and local service technicians. We present two AR-based case studies: First, a mobile AR architecture based on optical see-through glasses is used for an on-site local repair task. Second, a remote architecture based on a portable tablet PC and a high-precision tracking system is used to realize off-site expert access. The to-be-serviced machine is visualized inside a large area similar to a machinery hall and can be inspected by experts walking around this virtual plant, using the tablet and perspectively correct rendering to understand the production process and the operation context. Both methods have been evaluated in first user studies.
Sebastian Oberdörfer, Marc Erich Latoschik,
Interactive Gamified 3D-Training of Affine Transformations
, In
Proceedings of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)
, pp. 343-344
.
New York, NY, USA
:
Association for Computing Machinery
, 2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{oberdorfer2016interactive,
title = {Interactive Gamified 3D-Training of Affine Transformations},
author = {Oberdörfer, Sebastian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2016},
pages = {343-344},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-interactive-gamified-3d-training-of-affine-transformations.pdf},
doi = {10.1145/2993369.2996314}
}
Abstract: This article presents the Gamified Training Environment for Affine Transformations (GEtiT). GEtiT uses a 3D environment to visualize the effects of object rotation, translation, scaling, reflection, and shearing in 3D space. It encodes the abstract knowledge about homogeneous transformations and their order of application using specific game mechanics encoding 3D movements on different levels of abstraction. Progress in the game requires mastering the game mechanics of a certain level of abstraction to modify objects in 3D space towards a desired goal position and/or shape. Each level increases the abstraction of the representation towards a final 4 × 4 homogeneous matrix representation. Executing the game mechanics during gameplay results in effective training of the knowledge due to constant repetition. An evaluation showed a learning effect equal to that of a traditional training method, while achieving higher enjoyment of use, indicating that the learning quality was superior to the traditional training method.
Dennis Wiebusch, Martin Fischbach, Florian Niebling, Marc Erich Latoschik,
Low-Cost Raycast-based Coordinate System Registration for Consumer Depth Cameras
, In
Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{wiebusch2016lowcost,
title = {Low-Cost Raycast-based Coordinate System Registration for Consumer Depth Cameras},
author = {Wiebusch, Dennis and Fischbach, Martin and Niebling, Florian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference},
year = {2016},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-ieeevr-poster-calibration.pdf}
}
Abstract: We present four raycast-based techniques that determine the transformation between a depth camera's coordinate system and the coordinate system defined by a rectangular surface. In addition, the surface's dimensions are measured. In contrast to other approaches, these techniques limit additional hardware requirements to commonly available, low-cost artifacts and focus on simple non-laborious procedures. A preliminary study examining our Kinect v2-based proof of concept revealed promising first results. The utilized software is available as an open-source project.
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik,
Maintainable Management and Access of Lexical Knowledge for Multimodal Virtual Reality Interfaces
, In
Proceedings of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)
, pp. 347-348
.
ACM
, 2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{zimmerer2016maintainable,
title = {Maintainable Management and Access of Lexical Knowledge for Multimodal Virtual Reality Interfaces},
author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2016},
pages = {347-348},
publisher = {ACM},
url = {http://dl.acm.org/authorize?N40677}
}
Abstract: This poster presents a maintainable method to manage lexical information required for multimodal interfaces. It is tailored for the application in real-time interactive systems, specifically for Virtual Reality, and solves three problems commonly encountered in this context: (1) The lexical information is defined on and grounded in a common knowledge representation layer (KRL) based on OWL. The KRL describes application objects and possible system functions in one place and avoids error-prone redundant data management. (2) The KRL is tightly integrated into the simulator platform using a semantically enriched object model that is auto-generated from the KRL and thus fosters high performance access. (3) A well-defined interface provides application wide access to semantic application state information in general and the lexical information in specific, which greatly contributes to decoupling, maintainability, and reusability.
B. Eckstein, Jean-Luc Lugrin, D. Wiebusch, M.E. Latoschik,
PEARS - Physics Extension And Representation through Semantics
, In
IEEE Transactions on Computational Intelligence and AI in Games
, Vol.
8
(
2)
, pp. 178-189
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{7347376,
title = {PEARS - Physics Extension And Representation through Semantics},
author = {Eckstein, B. and Lugrin, Jean-Luc and Wiebusch, D. and Latoschik, M.E.},
journal = {IEEE Transactions on Computational Intelligence and AI in Games},
year = {2016},
volume = {8},
number = {2},
pages = {178-189},
url = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7347376},
doi = {10.1109/TCIAIG.2015.2505404}
}
Abstract: Today's physics engines mainly simulate classical mechanics and rigid body dynamics, with some recent advances also capable of simulating massive particle systems and some approximations of fluid dynamics. An accurate numerical simulation of complex non-mechanical processes in real time is beyond the state of the art in the respective fields. This article illustrates an alternative approach to a purely numerical solution. It uses a semantic representation of physical properties and processes as well as a reasoning engine to model cause and effect between objects, based on their material properties. Classical collision detection is combined with semantic rules to model various physical processes, e.g., in the areas of thermodynamics, electrodynamics, and fluid dynamics, as well as chemical processes. Each process is broken down into fine-grained sub-processes capable of approximating continuous transitions with discretized state changes. Our system applies these high-level state descriptions to low-level value changes, which are directly mapped to a graphical representation of the scene. We demonstrate our framework's ability to support multiple complex, causally connected physical and chemical processes by simulating a Goldberg machine. Our performance benchmarks validate its scalability and potential application for entertainment or edutainment purposes.
Jan-Philipp Stauffert, Florian Niebling, Marc Erich Latoschik,
Reducing Application-Stage Latencies For Real-Time Interactive Systems
, In
9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
IEEE Computer Society
, 2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{stauffert2016reducing,
title = {Reducing Application-Stage Latencies For Real-Time Interactive Systems},
author = {Stauffert, Jan-Philipp and Niebling, Florian and Latoschik, Marc Erich},
booktitle = {9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
year = {2016},
publisher = {IEEE Computer Society},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-searis-application-latency-preprint.pdf}
}
Abstract: Latency is a pressing problem in Virtual Reality (VR) applications. Low latencies are required in VR to reduce perceptual artifacts and cyber sickness. Latency jitter, the variance in the pattern of latency changes over time, may additionally cause unwanted effects. This paper analyzes latency jitter caused by typical inter-thread communication (ITC) techniques commonly used in today's computer systems employed for VR, the influence of the operating system scheduler, and the effect of different garbage collection (GC) methods, to understand their effect on latency spikes, here for different Java Virtual Machines (JVMs). We measure the scalability and latencies of various ITC techniques with an increasing number of threads and actors performing prototypical concurrent tasks. Four different benchmark implementations on a vanilla Linux kernel as well as on a real-time (RT) Linux kernel assess whether an RT variant of a multiuser multiprocess operating system can prevent latency spikes and how this behavior would apply to different programming languages and ITC techniques.
We confirmed that the scheduler and the prioritization of the VR application both play an important role and identified their impact on implementation strategies. Also, Linux RT can limit latency jitter at the cost of throughput for certain implementations. As expected, the choice of GC method is also critical and drastically changes the latency patterns. As a result, we suggest that coarse-grained concurrency should be employed to avoid the accumulation of scheduler latencies and unwanted latency jitter in the native ITC case, while actor systems are found to support a higher degree of concurrency granularity and a higher level of abstraction.
Jan-Philipp Stauffert, Florian Niebling, Marc Erich Latoschik,
Reducing Application-Stage Latencies of Interprocess Communication Techniques for Real-Time Interactive Systems
, In
Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{stauffert2016reducingp,
title = {Reducing Application-Stage Latencies of Interprocess Communication Techniques for Real-Time Interactive Systems},
author = {Stauffert, Jan-Philipp and Niebling, Florian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 23rd IEEE Virtual Reality (IEEE VR) conference},
year = {2016},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-ieeevr-poster-rtlatency.pdf}
}
Abstract: Latency jitter is a pressing problem in Virtual Reality (VR) applications. This paper analyzes latency jitter caused by typical interprocess communication (IPC) techniques commonly found in today's computer systems used for VR. Test programs measure the scalability and latencies of various IPC techniques as an increasing number of threads perform the same task concurrently. We use four different implementations on a vanilla Linux kernel as well as on a real-time (RT) Linux kernel to further assess whether an RT variant of a multiuser multiprocess operating system can prevent latency spikes and how this behavior would apply to different programming languages and IPC techniques. We found that Linux RT can limit latency jitter at the cost of throughput for certain implementations. Further, coarse-grained concurrency should be employed to avoid the accumulation of scheduler latencies, especially for native system-space IPC, while actor systems are found to support a higher degree of concurrency granularity and a higher level of abstraction.
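As a rough illustration of the kind of measurement the Stauffert et al. abstracts describe, the following minimal Python sketch (an assumed stand-in; the papers benchmark native and JVM-based techniques, not Python) sends timestamped messages from a producer to a consumer thread and records per-message delays, from which jitter can be read off as the spread between typical and worst-case values:

```python
import queue
import statistics
import threading
import time

def measure_itc_latency(n_messages=2000):
    # Push timestamped messages through a thread-safe queue to a consumer
    # thread and record the send-to-receive delay of each message (ns).
    q = queue.Queue()
    delays = []

    def consumer():
        for _ in range(n_messages):
            sent = q.get()
            delays.append(time.perf_counter_ns() - sent)

    t = threading.Thread(target=consumer)
    t.start()
    for _ in range(n_messages):
        q.put(time.perf_counter_ns())
    t.join()
    return delays

delays = measure_itc_latency()
# Jitter shows up in the gap between typical and worst-case delays,
# which a mean value alone would hide.
print(statistics.median(delays), max(delays))
```

The median-versus-maximum comparison at the end mirrors the papers' point that mean or worst-case values alone miss the distribution pattern of latency changes.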
Dominik Gall, Jean-Luc Lugrin, Dennis Wiebusch, Marc Erich Latoschik,
Remind Me: An Adaptive Recommendation-Based Simulation of Biographic Associations
, In
Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI)
, pp. 191-195
.
ACM
, 2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{gall2016remind,
title = {Remind Me: An Adaptive Recommendation-Based Simulation of Biographic Associations},
author = {Gall, Dominik and Lugrin, Jean-Luc and Wiebusch, Dennis and Latoschik, Marc Erich},
booktitle = {Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI)},
year = {2016},
pages = {191-195},
publisher = {ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-IUI-Remind_Me-Camera_Ready.pdf}
}
Abstract: Classical reminiscence therapy has been shown to effectively enhance the stability of memory and identity in people with dementia. Typically, reminiscence therapy uses biography artifacts like photos and personal items and objects. Today, many of these artifacts are from the digital realm providing new options to adapt or even improve the purely analog therapy. In this work we propose a method to enhance reminiscence therapy by computer simulated biographic associations. Our approach provides assistance for associative reasoning on affective stimuli and thus enables access to biographic content so that no deliberate search is required. We develop a recommender model for mapping mental states to biographic content based on similarity. The system dynamically adapts its state and the depicted digital artifacts to the responses of the user. It is a first step towards an immersive reminiscence therapy which will incorporate associated stimuli on multiple channels to increase effectiveness. A preliminary study showed encouraging results concerning the usability of the system.
Martin Fischbach, Dennis Wiebusch, Marc Erich Latoschik,
Semantics-based Software Techniques for Maintainable Multimodal Input Processing in Real-time Interactive Systems
, In
9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
IEEE Computer Society
, 2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fischbach2016semanticsbased,
title = {Semantics-based Software Techniques for Maintainable Multimodal Input Processing in Real-time Interactive Systems},
author = {Fischbach, Martin and Wiebusch, Dennis and Latoschik, Marc Erich},
booktitle = {9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
year = {2016},
publisher = {IEEE Computer Society},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-searis-fischbach-manuscript.pdf}
}
Abstract: Maintainability, i.e. reusability, modifiability, and modularity, is a critical non-functional quality requirement, especially for software frameworks. Its fulfilment is already challenging for low-interactive application areas. It is additionally complicated by complex system designs of Real-time Interactive Systems (RISs), required for Augmented, Mixed, and Virtual Reality, as well as computer games. If such systems incorporate AI methods, as required for the implementation of multimodal interfaces or smart environments, it is even further exacerbated. Existing approaches strive to establish software technical solutions to support the close temporal and semantic coupling required for multimodal processing and at the same time preserve a general decoupling principle between involved software modules. We present two key solutions that target the semantic coupling issue: (1) a semantics-based access scheme to principal elements of the application state and (2) the specification of effects by means of semantic function descriptions for multimodal processing. Both concepts are modeled in an OWL ontology. The applicability of our concepts is showcased by a prototypical implementation and explained by an interaction example that is applied for two application areas.
Daniel Roth, Kristoffer Waldow, Felix Stetter, Gary Bente, Marc Erich Latoschik, Arnulph Fuhrmann,
SIAMC - A Socially Immersive Avatar Mediated Communication Platform
, In
Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, VRST 2016, Munich, Germany, 2-4 November
Dieter Kranzlmüller, Gudrun Klinker (Eds.),
, pp. 357-358
.
ACM
, 2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{roth2016siamc,
title = {SIAMC - A Socially Immersive Avatar Mediated Communication Platform},
author = {Roth, Daniel and Waldow, Kristoffer and Stetter, Felix and Bente, Gary and Latoschik, Marc Erich and Fuhrmann, Arnulph},
editor = {Kranzlmüller, Dieter and Klinker, Gudrun},
booktitle = {Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, VRST 2016, Munich, Germany, 2-4 November},
year = {2016},
pages = {357-358},
publisher = {ACM},
url = {http://dl.acm.org/authorize?N40664}
}
Abstract: In this paper, we present an avatar-mediated communication platform for socially immersive interaction in virtual reality (VR). Our approach is based on the combination of body tracking, facial expression tracking, and "fishtank" VR. Our prototype enables two remote users to communicate via avatars.
Jan-Philipp Stauffert, Florian Niebling, Marc Erich Latoschik,
Towards Comparable Evaluation Methods and Measures for Timing Behaviour of Virtual Reality Systems
, In
Proceedings of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)
, pp. 47-50
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{stauffert2016towards,
title = {Towards Comparable Evaluation Methods and Measures for Timing Behaviour of Virtual Reality Systems},
author = {Stauffert, Jan-Philipp and Niebling, Florian and Latoschik, Marc Erich},
booktitle = {Proceedings of the 22nd ACM Symposium on Virtual Reality Software and Technology (VRST)},
year = {2016},
pages = {47-50},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-acmvrst-towards-comparable-evaluation-methods-and-measures-for-timing-behavior-of-virtual-reality-systems-preprint.pdf}
}
Abstract: A low latency is a fundamental timeliness requirement to reduce the potential risks of cyber sickness and to increase the effectiveness, efficiency, and user experience of Virtual Reality systems. The effects of uniform latency degradation based on mean or worst-case values are well researched. In contrast, the effects of latency jitter, the distribution pattern of latency changes over time, have largely been ignored so far, although today's consumer VR systems are extremely vulnerable in this respect. We investigate the applicability of the Walsh, generalized ESD, and modified z-score tests for the detection of outliers as one central aspect of the latency distribution. The tests are applied to well-defined test cases mimicking typical timing behavior expected from today's concurrent architectures. We introduce accompanying graphical visualization methods to inspect, analyze, and communicate the latency behavior of VR systems beyond simple mean or worst-case values. As a result, we propose a stacked modified z-score test for more detailed analysis.
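The modified z-score test named in this abstract is a standard robust outlier criterion (Iglewicz and Hoaglin, with the conventional 3.5 cutoff); the following Python sketch illustrates the general technique on latency samples and is not the authors' implementation:

```python
import statistics

def modified_z_scores(samples):
    # Modified z-score: robust to outliers because it uses the median and
    # the median absolute deviation (MAD) instead of mean and std. dev.
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples)
    return [0.6745 * (x - med) / mad for x in samples]

def outliers(samples, threshold=3.5):
    # Flag samples (e.g., latency spikes) whose modified z-score exceeds
    # the conventional 3.5 cutoff. Assumes non-constant data (MAD > 0).
    return [i for i, m in enumerate(modified_z_scores(samples))
            if abs(m) > threshold]

# Frame latencies in ms with one obvious spike at index 4.
latencies = [11.0, 11.2, 10.9, 11.1, 48.0, 11.0, 10.8, 11.2]
print(outliers(latencies))  # the 48.0 ms spike is flagged
```

Applied per time window, such a test separates isolated latency spikes from the bulk of the distribution, which is the role it plays in the analysis the abstract describes.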
Daniel Roth, Carola Bloch, Anne-Kathrin Wilbers, Marc Erich Latoschik, Kai Kaspar, Gary Bente,
What You See is What You Get: Channel Dominance in the Decoding of Affective Nonverbal Behavior Displayed by Avatars
, In
Presentation at the 66th Annual Conference of the International Communication Association (ICA), June 9-13 2016, Fukuoka, Japan
.
2016.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{roth2016channel,
title = {What You See is What You Get: Channel Dominance in the Decoding of Affective Nonverbal Behavior Displayed by Avatars},
author = {Roth, Daniel and Bloch, Carola and Wilbers, Anne-Kathrin and Latoschik, Marc Erich and Kaspar, Kai and Bente, Gary},
booktitle = {Presentation at the 66th Annual Conference of the International Communication Association (ICA), June 9-13 2016, Fukuoka, Japan},
year = {2016},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2016-Roth-WYSIWYG.pdf}
}
Abstract: Nonverbal expressions of emotions play an important role in social interactions. Regarding virtual environments (VEs) and the transmission of nonverbal cues in avatar-mediated communication, knowledge of the contribution of nonverbal channels to emotion recognition is essential. This study analyzed the impact of emotional expressions in faces and body motion on emotion recognition. Motion capture data of expressive body movements from actors portraying either anger or happiness were animated using avatars with congruent and incongruent facial expressions. Participants viewed the resulting animations and rated the perceived emotion. During stimulus presentation, gaze behavior was recorded. The analysis of the rating results and visual attention patterns indicates that humans predominantly judge emotions based on the facial expression and pay more attention to the head region as an information source to recognize emotions. This implies that the transmission of facial expressions is important for the design of social VEs.
2015
Jean-Luc Lugrin, Johanna Latt, Marc Erich Latoschik,
Anthropomorphism and Illusion of Virtual Body Ownership
, In
International Conference on Artificial Reality and Telexistence/Eurographics Symposium on Virtual Environments (ICAT/EGVE)
, pp. 1-8
.
2015.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{lugrin2015anthropomorphism,
title = {Anthropomorphism and Illusion of Virtual Body Ownership},
author = {Lugrin, Jean-Luc and Latt, Johanna and Latoschik, Marc Erich},
booktitle = {International Conference on Artificial Reality and Telexistence/Eurographics Symposium on Virtual Environments (ICAT/EGVE)},
year = {2015},
pages = {1-8},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2015-icat-avatar-anthro-and-ivbo.pdf}
}
Abstract: In this paper we present a novel experiment to explore the impact of avatar realism on the illusion of virtual body ownership (IVBO) in immersive virtual environments, with full-body avatar embodiment and freedom of movement. We evaluated four distinct avatars presenting an increasing level of anthropomorphism in their detailed composition. Our results revealed that each avatar elicited a relatively high level of illusion. However, both the machine-like and cartoon-like avatars elicited an equivalent IVBO, slightly superior to the human ones. A realistic human appearance is therefore not a critical top-down factor of IVBO, and could lead to an "Uncanny Valley" effect.
Jean-Luc Lugrin, Johanna Latt, Marc Erich Latoschik,
Avatar Anthropomorphism and Illusion of Body Ownership in VR
, In
Proceedings of the 25th International Conference on Artificial Reality and Telexistence and the 20th Eurographics Symposium on Virtual Environments (ICAT-EGVE '15)
.
2015.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2015avatar,
title = {Avatar Anthropomorphism and Illusion of Body Ownership in VR},
author = {Lugrin, Jean-Luc and Latt, Johanna and Latoschik, Marc Erich},
booktitle = {Proceedings of the 25th International Conference on Artificial Reality and Telexistence and the 20th Eurographics Symposium on Virtual Environments (ICAT-EGVE '15)},
year = {2015},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2015-icat-avatar-anthro-and-ivbo.pdf}
}
Jean-Luc Lugrin, Johanna Latt, Marc Erich Latoschik,
Avatar Anthropomorphism and Illusion of Body Ownership in VR
, In
Proceedings of the IEEE VR 2015
.
2015.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2015avatar,
title = {Avatar Anthropomorphism and Illusion of Body Ownership in VR},
author = {Lugrin, Jean-Luc and Latt, Johanna and Latoschik, Marc Erich},
booktitle = {Proceedings of the IEEE VR 2015},
year = {2015},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2015-ieeevr-avatar-anthro-and-ivbo-poster.pdf}
}
Jean-Luc Lugrin, Maximilian Landeck, Marc Erich Latoschik,
Avatar Embodiment Realism and Virtual Fitness Training
, In
Proceedings of the IEEE VR 2015
, pp. 225-226
.
2015.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2015avatar,
title = {Avatar Embodiment Realism and Virtual Fitness Training},
author = {Lugrin, Jean-Luc and Landeck, Maximilian and Latoschik, Marc Erich},
booktitle = {Proceedings of the IEEE VR 2015},
year = {2015},
pages = {225-226},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2015-ieeevr-avatar-and-virtual-fitness.pdf}
}
Dennis Wiebusch, Marc Erich Latoschik,
Decoupling the Entity-Component-System Pattern using Semantic Traits for Reusable Realtime Interactive Systems
, In
9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
2015.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{Wiebusch:2015aa,
title = {Decoupling the Entity-Component-System Pattern using Semantic Traits for Reusable Realtime Interactive Systems},
author = {Wiebusch, Dennis and Latoschik, Marc Erich},
booktitle = {9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
year = {2015},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2015-ieeevr-searis-semantic-traits.pdf}
}
Abstract: The Entity-Component-System (ECS) pattern has become a major design pattern used in modern architectures for Real-Time Interactive System (RIS) frameworks. The pattern decouples different aspects of a simulation like graphics, physics, or AI vertically. Its main purpose is to separate algorithms, provided by high-level tailored modules or engines, from the object structure of the low-level entities simulated by those engines. In this context, it retains advantages of object-oriented programming (OOP) like encapsulation and access control. Still, the OOP paradigm introduces coupling when it comes to the low-level implementation details, thus negatively affecting reusability of such systems.
To address these issues we propose a semantics-based approach which makes it possible to escape the rigid structures imposed by OOP. Our approach introduces the concept of semantic traits, which enable retrospective classification of entities. The utilization of semantic traits facilitates reuse in the context of ECS-based systems by further decoupling objects from their class definitions. The applicability of the approach is validated by examples from a prototypical integration into a recently developed RIS.
Daniel Roth, Marc Erich Latoschik, Kai Vogeley, Gary Bente,
Hybrid Avatar-Agent Technology - A Conceptual Step Towards Mediated "Social" Virtual Reality and its Respective Challenges
, In
i-com Journal of Interactive Media
, Vol.
14
(
2)
, pp. 107-114
.
2015.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{roth2015hybrid,
title = {Hybrid Avatar-Agent Technology - A Conceptual Step Towards Mediated "Social" Virtual Reality and its Respective Challenges},
author = {Roth, Daniel and Latoschik, Marc Erich and Vogeley, Kai and Bente, Gary},
journal = {i-com Journal of Interactive Media},
year = {2015},
volume = {14},
number = {2},
pages = {107-114},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2015-roth-haat.pdf},
doi = {10.1515/icom-2015-0030}
}
Abstract: Driven by large industry investments, developments of Virtual Reality (VR) technologies including unobtrusive sensors, actuators, and novel display devices are rapidly progressing. Realism and interactivity have been postulated as crucial aspects of immersive VR since the naissance of the concept. However, today's VR still falls short of creating real life-like experiences in many regards. This holds particularly true when introducing the "social dimension" into virtual worlds. Apparently, creating convincing virtual selves and virtual others and conveying meaningful and appropriate social behavior is still an open challenge for future VR. This challenge implies both technical aspects, such as the real-time capacities of the systems, and psychological aspects, such as the dynamics of human communication. Our knowledge of VR systems is still fragmented with regard to social cognition, although the social dimension is crucial when aiming at autonomous agents with a certain social background intelligence. It can be questioned, though, whether a perfect copy of real-life interactions is a realistic or even meaningful goal of social VR development at this stage. Taking into consideration the specific strengths and weaknesses of humans and machines, we propose a conceptual turn in social VR which focuses on what we call "hybrid avatar-agent systems". Such systems are required to generate i) avatar-mediated interactions between real humans, taking advantage of their social intuitions and flexible communicative skills, and ii) an artificial social intelligence (ASI) which monitors, and potentially moderates or transforms, manipulations of behavior in intercultural conversations. The current article sketches a respective base architecture and discusses necessary research prospects and challenges as a starting point for future research and development.
Marc Erich Latoschik, Dirk Reiners, Roland Blach, Pablo Figueroa (Eds.),
IEEE 8th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
IEEE digital library
, 2015.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@proceedings{latoschik2015software,
title = {IEEE 8th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Blach, Roland and Figueroa, Pablo},
year = {2015},
publisher = {IEEE digital library},
url = {http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7847760}
}
Abstract: SEARIS provides a forum for researchers and practitioners working on the design, development, and support of realtime interactive systems (RIS). These systems span from Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) environments to novel Human-Computer Interaction systems (such as multimodal or multitouch architectures) and entertainment applications in general. Their common principle is a strong user-centric orientation, which requires real-time processing of simulation aspects as well as input/output events according to perceptual constraints. Therefore, we encourage researchers and developers of real-time human-computer interaction systems of all flavors to share their experiences and learn from each other during this workshop.
Jean-Luc Lugrin, Maximilian Wiedemann, Daniel Bieberstein, Marc Erich Latoschik,
Influence of Avatar Realism on Stressful Situation in VR
, In
Proceedings of the IEEE VR 2015
, pp. 227-228
.
2015.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2015influence,
title = {Influence of Avatar Realism on Stressful Situation in VR},
author = {Lugrin, Jean-Luc and Wiedemann, Maximilian and Bieberstein, Daniel and Latoschik, Marc Erich},
booktitle = {Proceedings of the IEEE VR 2015},
year = {2015},
pages = {227-228},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2015-ieeevr-avatar-and-stressfull-situation.pdf}
}
Daniel Roth, Carola Bloch, Anne-Kathrin Wilbers, Kai Kaspar, Marc Erich Latoschik, Gary Bente,
Quantification of Signal Carriers for Emotion Recognition from Body Movement and Facial Affects
, In
Abstracts of the 18th European Conference on Eye Movements
, In
Journal of Eye Movement Research
U. Ansorge, T. Ditye, A. Florack, H. Leder (Eds.),
, Vol.
8
(
4)
, p. 192
.
2015.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{roth:2015b,
title = {Quantification of Signal Carriers for Emotion Recognition from Body Movement and Facial Affects},
author = {Roth, Daniel and Bloch, Carola and Wilbers, Anne-Kathrin and Kaspar, Kai and Latoschik, Marc Erich and Bente, Gary},
editor = {Ansorge, U. and Ditye, T. and Florack, A. and Leder, H.},
booktitle = {Abstracts of the 18th European Conference on Eye Movements},
journal = {Journal of Eye Movement Research},
year = {2015},
volume = {8},
number = {4},
pages = {192},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2015-ecem-roth-quantification-signal-carriers.pdf}
}
Abstract: Nonverbal expressions of emotions in the human face and body play an important role in social interactions. Regarding affective human-computer interfaces and the use of nonverbal cues in computer-mediated communication, knowledge about the contribution of different nonverbal channels to emotion recognition is essential. The current study analyzed the relative impact of emotional expressions in faces and bodies of avatars on visual attention and emotion recognition. Avatar animations of expressive body movements were based on motion capture data from actors portraying either anger or happiness. A pre-study was conducted to select expressions which were perceived as either anger, happiness, or neutral. We systematically combined facial expressions and expressive movements by using blend shapes for facial animation, resulting in congruent and incongruent face-body combinations. 68 participants watched the resulting videos and rated the perceived emotion. During stimulus presentation, gaze behavior was recorded with a SMI RED500 eye tracker. Using dynamic regions of interest, we calculated dwell times and numbers of fixations in the face, head, and body regions. Results indicate that humans prioritize the face and head region as an information source for emotion recognition. This priority is visible in the visual attention patterns as well as in the explained judgment variance.
Marc Erich Latoschik,
Szene Trends: Interviews mit Experten für Java und Sicherheit im Flugzeug
, In
JavaSPEKTRUM
(
6)
.
2015.
[BibTeX]
[Download]
[BibSonomy]
@periodical{latoschik2015szene,
title = {Szene Trends: Interviews mit Experten für Java und Sicherheit im Flugzeug},
author = {Latoschik, Marc Erich},
journal = {JavaSPEKTRUM},
year = {2015},
number = {6},
url = {}
}
2014
Dennis Wiebusch, Marc Erich Latoschik,
A Uniform Semantic-based Access Model for Realtime Interactive Systems
, In
IEEE VR Workshop on Software Engineering and Architectures for Realtime Interactive Systems
.
2014.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{Wiebusch:2014aa,
title = {A Uniform Semantic-based Access Model for Realtime Interactive Systems},
author = {Wiebusch, Dennis and Latoschik, Marc Erich},
booktitle = {IEEE VR Workshop on Software Engineering and Architectures for Realtime Interactive Systems},
year = {2014},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2014-ieeevr-searis-uniform-access-model.pdf}
}
Abstract: This research presents a uniform semantic simulation state representation and access model for realtime interactive systems (RIS) in the field of Virtual, Augmented, and Mixed Reality. The role of this model is to provide a uniform interface to a centralized virtual world state, and simple mechanisms to manage all simulation components acting on it. It addresses the low maintainability and reusability of the traditional non-uniform world access schemes. The proposed model is based on two fundamental requirements: sharing a common simulation state and updating it via events. The state is structured around an entity-model, which is combined with a central registry that provides symbol-based semantic access.
Marc Erich Latoschik, Martin Fischbach,
Engineering Variance: Software Techniques for Scalable, Customizable, and Reusable Multimodal Processing
, In
Proceedings of the HCI International Conference 2014
, pp. 308-319
.
Springer
, 2014.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik2014engineering,
title = {Engineering Variance: Software Techniques for Scalable, Customizable, and Reusable Multimodal Processing},
author = {Latoschik, Marc Erich and Fischbach, Martin},
booktitle = {Proceedings of the HCI International Conference 2014},
year = {2014},
pages = {308-319},
publisher = {Springer},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2014-hcii-engineering-varicane-latoschik-fischbach.pdf}
}
Abstract: This article describes four software techniques to enhance the overall quality of multimodal processing software and to include concurrency and variance due to individual characteristics and cultural context. First, the processing steps are decentralized and distributed using the actor model. Second, functor objects decouple domain- and application-specific operations from universal processing methods. Third, domain-specific languages are provided inside of specialized feature processing units to define necessary algorithms in a human-readable and comprehensible format. Fourth, constituents of the DSLs (including the functors) are semantically grounded into a common ontology supporting syntactic and semantic correctness checks as well as code-generation capabilities. These techniques provide scalable, customizable, and reusable technical solutions for recurring multimodal processing tasks.
Martin Fischbach, Chris Zimmerer, Anke Giebler-Schubert, Marc Erich Latoschik,
Exploring multimodal interaction techniques for a mixed reality digital surface (demo)
, In
IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
, pp. 335-336
.
2014.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fischbach2014exploring,
title = {Exploring multimodal interaction techniques for a mixed reality digital surface (demo)},
author = {Fischbach, Martin and Zimmerer, Chris and Giebler-Schubert, Anke and Latoschik, Marc Erich},
booktitle = {IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
year = {2014},
pages = {335-336},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2014-ismar-fischbach-xroads-draft.pdf}
}
Abstract: Quest - XRoads is a multimodal and multimedia mixed reality version of the traditional role-play tabletop game Quest: Zeit der Helden. The original game concept is augmented with virtual content, controllable via auditory, tangible and spatial interfaces to permit a novel gaming experience and to increase the satisfaction while playing. The demonstration consists of a turn-based skirmish, where up to four players have to collaborate to defeat an opposing player. In order to be victorious, players have to control heroes or villains and use their abilities via speech, gesture, touch as well as tangible interactions.
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik,
Fusion of mixed reality tabletop and location-based applications for pervasive games
, In
Proceedings of the 2014 ACM International Conference on Interactive Tabletops and Surfaces
, pp. 427-430
.
ACM
, 2014.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{zimmerer2014fusion,
title = {Fusion of mixed reality tabletop and location-based applications for pervasive games},
author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich},
booktitle = {Proceedings of the 2014 ACM International Conference on Interactive Tabletops and Surfaces},
year = {2014},
pages = {427-430},
publisher = {ACM},
url = {http://dl.acm.org/authorize?N11771}
}
Marc Erich Latoschik, Dirk Reiners, Roland Blach, Pablo Figueroa (Eds.),
IEEE 7th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
IEEE digital library
, 2014.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@proceedings{latoschik2014software,
title = {IEEE 7th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Blach, Roland and Figueroa, Pablo},
year = {2014},
publisher = {IEEE digital library},
url = {http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7145506}
}
Abstract: SEARIS provides a forum for researchers and practitioners working on the design, development, and support of realtime interactive systems (RIS). These systems span from Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) environments to novel Human-Computer Interaction systems (such as multimodal or multitouch architectures) and entertainment applications in general. Their common principle is a strong user centric orientation which requires real-time processing of simulation aspects as well as input/output events according to perceptual constraints. Therefore, we encourage researchers and developers of real-time human computer interaction systems of all flavors to share their experiences and learn from each other during this workshop.
Marc Erich Latoschik, Wolfgang Stürzlinger,
On the Art of the Evaluation and Presentation of RIS-Engineering
, In
7th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
IEEE Computer Society
, 2014.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{latoschik:2014c,
title = {On the Art of the Evaluation and Presentation of RIS-Engineering},
author = {Latoschik, Marc Erich and Stürzlinger, Wolfgang},
booktitle = {7th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
year = {2014},
publisher = {IEEE Computer Society},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2014-ieee-vr-searis-art-of-ris-engineering-preprint.pdf}
}
Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik, Michael Fendt,
Picture-based Localisation For Pervasive Gaming
, In
Virtuelle und Erweiterte Realität, 11. Workshop der GI-Fachgruppe VR/AR
.
2014.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fischbach2014picturebased,
title = {Picture-based Localisation For Pervasive Gaming},
author = {Fischbach, Martin and Lugrin, Jean-Luc and Latoschik, Marc Erich and Fendt, Michael},
booktitle = {Virtuelle und Erweiterte Realität, 11. Workshop der GI-Fachgruppe VR/AR},
year = {2014},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2014-vrar-fischbach-pricture-based-draft.pdf}
}
Abstract: Localisation, i.e. determining the position of the user(s) or devices, constitutes the key requirement for almost all types of mobile pervasive games. However, current marker-based localisation systems, e.g. based on QR markers, present drawbacks that limit game deployment, scalability, and maintainability. In this paper, we propose an alternative to solve these issues and introduce the first steps towards its full realisation. Our approach relies on markerless picture matching using a natural feature detection algorithm. Players reproduce a camera shot of a real-world site in order to confirm their presence and progress further in the game. One of the game-critical requirements is to provide accurate recognition while preserving application responsiveness with a large range of mobile devices and camera resolutions. We developed a proof-of-concept system and determined the best picture resolutions and feature numbers necessary to preserve both accuracy and responsiveness on diverse mobile devices. Our first results demonstrate the feasibility of achieving precise recognition within real-time constraints. We believe such localisation systems have the potential to considerably facilitate pervasive game authoring while promoting new types of game mechanics.
Stephan Rehfeld, Henrik Tramberend, Marc Erich Latoschik,
Profiling and benchmarking event- and message-passing-based asynchronous Realtime Interactive Systems
, In
Proceeding of the 20th Symposium on Virtual Reality Software and Technology, VRST
, pp. 151-159
.
2014.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{rehfeld:2014b,
title = {Profiling and benchmarking event- and message-passing-based asynchronous Realtime Interactive Systems},
author = {Rehfeld, Stephan and Tramberend, Henrik and Latoschik, Marc Erich},
booktitle = {Proceeding of the 20th Symposium on Virtual Reality Software and Technology, VRST},
year = {2014},
pages = {151-159},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2014-rehfeld-profiling-and-benchmarking.pdf}
}
Abstract: This article describes a set of metrics for a message-passing-based asynchronous Realtime Interactive System (RIS). Current trends in concurrent RISs are analyzed, several profiling tools are outlined, and common metrics are identified. A set of nine metrics is presented in a unified and formalized way. The implementation of a profiler that measures and calculates these metrics is illustrated. The implementations of an instrumentation tool and a visualization tool are described. A case study shows how this approach proved beneficial during the optimization of the latency of an actual system.
Marc Erich Latoschik,
Smart Graphics/Intelligent Graphics
, In
Informatik-Spektrum
, Vol.
37
(
1)
, pp. 36-41
.
Springer
, 2014.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{latoschik2013smart,
title = {Smart Graphics/Intelligent Graphics},
author = {Latoschik, Marc Erich},
journal = {Informatik-Spektrum},
year = {2014},
volume = {37},
number = {1},
pages = {36-41},
publisher = {Springer},
url = {http://dx.doi.org/10.1007/s00287-013-0759-z}
}
Abstract: "Intelligent Graphics is about visually representing the world and visually representing our ideas. Artificial intelligence is about symbolically representing the world, and symbolically representing our ideas. And between the visual and the symbolic, between the concrete and the abstract, there should be no boundary." (Henry Lieberman). Lieberman's quote describes a computing paradigm whose central idea rests on combining methods from Artificial Intelligence (AI) with those of Computer Graphics (CG). Today, the term Smart Graphics, or synonymously Intelligent Graphics, covers a wide range of application scenarios. These range from the intelligent, context-sensitive arrangement of graphical elements in 2D desktop systems to speech-and-gesture interfaces and intelligent agents acting as user assistants in virtual environments. Common to all these approaches is that a graphical human-computer interface is adapted to the cognitive characteristics of the user by means of AI techniques in order to improve usability.
2013
Stephan Rehfeld, Henrik Tramberend, Marc Erich Latoschik,
An actor-based distribution model for Realtime Interactive Systems
, In
Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), 2013 6th Workshop on
, pp. 9-16
.
2013.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{rehfeld:2013a,
title = {An actor-based distribution model for Realtime Interactive Systems},
author = {Rehfeld, Stephan and Tramberend, Henrik and Latoschik, Marc Erich},
booktitle = {Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), 2013 6th Workshop on},
year = {2013},
pages = {9-16},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2013-rehfeld-actor-based-distribution.pdf},
doi = {10.1109/SEARIS.2013.6798103}
}
Abstract: This article illustrates the design and development of distribution properties for Realtime Interactive Systems (RIS). The approach is based on the actor model for concurrent computation. The actor model provides a unified API for intra-node as well as for inter-node distribution and strongly facilitates the development of concurrent applications. Several benchmarks analyze vital performance properties to support the design decisions taken. The benchmarks describe typical setups found in RIS-applications, i.e., distributed rendering for large screen and tiled displays in immersive VR setups. Actual and potential performance impacts caused by the actor middleware are analyzed and identified and alternative solutions to overcome these impacts are provided.
Sebastian Oberdörfer, Marc Erich Latoschik,
Develop your strengths by gaming: Towards an inventory of gamificationable skills
, In
Proceedings der INFORMATIK 2013, 43. Jahrestagung der Gesellschaft für Informatik, Workshop Virtuelle Welten und Gamification
Matthias Horbach (Ed.),
, Vol.
P-220
, pp. 2346-2357
.
Gesellschaft für Informatik e.V.
, 2013.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{oberdorfer:2013a,
title = {Develop your strengths by gaming: Towards an inventory of gamificationable skills},
author = {Oberdörfer, Sebastian and Latoschik, Marc Erich},
editor = {Horbach, Matthias},
booktitle = {Proceedings der INFORMATIK 2013, 43. Jahrestagung der Gesellschaft für Informatik, Workshop Virtuelle Welten und Gamification},
year = {2013},
volume = {P-220},
pages = {2346-2357},
publisher = {Gesellschaft für Informatik e.V.},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2013-develop-your-strengths-by-gaming.pdf}
}
Abstract: This paper analyses existing gamification approaches to build a mapping between game genres and potential human skills required by and potentially trained by the specific genres. This mapping is then applied during an expert review of two typical game scenarios: an action- and reaction-oriented mini game and a collaborative group raid implemented in World of Warcraft. Both scenarios undergo an individual and detailed analysis to identify specific skill-related aspects. Relevant aspects characterizing each type are listed as a basis for a skill-mapping based on specific game mechanics utilized by each type. That is, the identified specific game mechanics require gaming skills which are then mapped to general physiological as well as cognitive and social human skills. This detailed game-mechanics-based skill-mapping is a first step towards a gamification index. Used in reverse order, from human skills to game mechanics, such an index will support the design of edutainment applications using gamification as a means to enhance skills required in real-world scenarios. The article concludes with a description of future work in the area of gamified skills as motivated by the work presented here.
Anke Giebler-Schubert, Chris Zimmerer, Thomas Wedler, Martin Fischbach, Marc Erich Latoschik,
Ein digitales Tabletop-Rollenspiel für Mixed-Reality-Interaktionstechniken
, In
Virtuelle und Erweiterte Realität, 10. Workshop der GI-Fachgruppe VR/AR
Marc Erich Latoschik, Oliver Staadt, Frank Steinicke (Eds.),
, pp. 181-184
.
Shaker Verlag
, 2013.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{gieblerschubert2013digitales,
title = {Ein digitales Tabletop-Rollenspiel für Mixed-Reality-Interaktionstechniken},
author = {Giebler-Schubert, Anke and Zimmerer, Chris and Wedler, Thomas and Fischbach, Martin and Latoschik, Marc Erich},
editor = {Latoschik, Marc Erich and Staadt, Oliver and Steinicke, Frank},
booktitle = {Virtuelle und Erweiterte Realität, 10. Workshop der GI-Fachgruppe VR/AR},
year = {2013},
pages = {181-184},
publisher = {Shaker Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2013-vrar-ein-digitales-tabletop-rollenspiel-fuer-mixed-reality-interaktionstechniken.pdf}
}
Abstract: This article describes the digital implementation of a role-play-based board game for exploring new interaction techniques. A multi-touch table with object recognition for tangible game elements (figures, cards, ...) serves as the shared mixed-reality game environment. The system augments the real objects with multimedia information according to the current game state. The integration of mobile devices via an HTML5 interface enables private and individualized interaction areas. The system combines different interaction techniques, such as touch input and interaction with tangible objects, to positively influence interaction satisfaction. A pilot study with users experienced in role-playing games examines the acceptance of the new game and interaction possibilities.
Marc Erich Latoschik, Dirk Reiners, Roland Blach, Pablo Figueroa (Eds.),
IEEE 6th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
IEEE digital library
, 2013.
[BibTeX]
[Download]
[BibSonomy]
@proceedings{latoschik2013software,
title = {IEEE 6th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Blach, Roland and Figueroa, Pablo},
year = {2013},
publisher = {IEEE digital library},
url = {http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6784150}
}
Martin Fischbach, Maximilian Neff, Immanuel Pelzer, Jean-Luc Lugrin, Marc Erich Latoschik,
Input Device Adequacy for Multimodal and Bimanual Object Manipulation in Virtual Environments
, In
Virtuelle und Erweiterte Realität, 10. Workshop der GI-Fachgruppe VR/AR
Marc Erich Latoschik, Oliver Staadt, Frank Steinicke (Eds.),
, pp. 145-156
.
Shaker Verlag
, 2013.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fischbach2013input,
title = {Input Device Adequacy for Multimodal and Bimanual Object Manipulation in Virtual Environments},
author = {Fischbach, Martin and Neff, Maximilian and Pelzer, Immanuel and Lugrin, Jean-Luc and Latoschik, Marc Erich},
editor = {Latoschik, Marc Erich and Staadt, Oliver and Steinicke, Frank},
booktitle = {Virtuelle und Erweiterte Realität, 10. Workshop der GI-Fachgruppe VR/AR},
year = {2013},
pages = {145--156},
publisher = {Shaker Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2013-vrar-input-device-adequacy.pdf}
}
Abstract: This article describes a benchmark for evaluating the adequacy of input devices for bimanual direct interaction techniques typically found in VR/AR applications. The benchmark implements a puzzle-like scenario inspired by shape and color sorting games, in which the user manipulates the position, rotation, and scale of 3D spheres. The continuous interactions are combined with multimodal state-change actions using either a button or speech interface. A follow-up usability study utilizes the benchmark to evaluate the performance of one professional and one consumer-grade tracking system for both state-changing interfaces. The results reveal similar adequacy for both tracking systems under both multimodal conditions.
Jean-Luc Lugrin, Dennis Wiebusch, Marc Erich Latoschik, Alexander Strehler,
Usability benchmarks for motion tracking systems
, In
Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology
, pp. 49-58
.
ACM
, 2013.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{lugrin2013usability,
title = {Usability benchmarks for motion tracking systems},
author = {Lugrin, Jean-Luc and Wiebusch, Dennis and Latoschik, Marc Erich and Strehler, Alexander},
booktitle = {Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology},
year = {2013},
pages = {49--58},
publisher = {ACM},
url = {http://dl.acm.org/authorize?N00860}
}
Abstract: Precise, accurate, fast, and low-latency motion tracking is a core requirement of real-time human-computer interfaces. The choices of tracking systems for a particular set of 3D interaction techniques are manifold. Hence, guidance in this task is greatly beneficial. In this paper, we propose to establish a set of canonical and simple game-based benchmarks for a potentially standardised comparison of tracking systems. The benchmarks focus on usability scores given reoccurring interaction tasks without requiring potentially missing, incomplete, or complex latency or accuracy raw measurements. Our first two benchmarks evaluate three tracking systems regarding motion-parallax and 3D object manipulation techniques. Our usability comparisons confirmed an expected advantage of low-latency/high-accuracy systems, while they also demonstrated that certain tracking systems perform better than suggested by previous measurements of their raw performances. This indicates that our approach provides an adequate replacement and improvement over the pure comparison of technical specifications. We believe our benchmarks could benefit the research community by facilitating a usability-based comparison of motion tracking systems.
Marc Erich Latoschik, Oliver Staadt, Frank Steinicke (Eds.),
Virtuelle und Erweiterte Realität, 10. Workshop der GI-Fachgruppe VR/AR
.
Shaker Verlag
, 2013.
[BibTeX]
[Download]
[BibSonomy]
@proceedings{latoschik2013virtuelle,
title = {Virtuelle und Erweiterte Realität, 10. Workshop der GI-Fachgruppe VR/AR},
editor = {Latoschik, Marc Erich and Staadt, Oliver and Steinicke, Frank},
year = {2013},
publisher = {Shaker Verlag},
url = {}
}
2012
Martin Fischbach, Christian Treffs, David Cyborra, Alexander Strehler, Thomas Wedler, Gerd Bruder, Andreas Pusch, Marc Erich Latoschik, Frank Steinicke,
A Mixed Reality Space for Tangible User Interaction
, In
Virtuelle und Erweiterte Realität - 9. Workshop der GI-Fachgruppe VR/AR
Christian Geiger, Jens Herder, Tom Vierjahn (Eds.),
, pp. 25-36
.
Shaker Verlag
, 2012.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fischbach2012mixed,
title = {A Mixed Reality Space for Tangible User Interaction},
author = {Fischbach, Martin and Treffs, Christian and Cyborra, David and Strehler, Alexander and Wedler, Thomas and Bruder, Gerd and Pusch, Andreas and Latoschik, Marc Erich and Steinicke, Frank},
editor = {Geiger, Christian and Herder, Jens and Vierjahn, Tom},
booktitle = {Virtuelle und Erweiterte Realität - 9. Workshop der GI-Fachgruppe VR/AR},
year = {2012},
pages = {25-36},
publisher = {Shaker Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2012-vrar-a-mixed-reality-space-for-tangible-user-interaction.pdf}
}
Abstract: Recent developments in the field of semi-immersive display technologies provide new possibilities for engaging users in interactive three-dimensional virtual environments (VEs). For instance, combining low-cost tracking systems (such as the Microsoft Kinect) and multi-touch interfaces enables inexpensive and easily maintainable interactive setups. The goal of this work is to bring together virtual as well as real objects on a stereoscopic multi-touch-enabled tabletop surface. Therefore, we present a prototypical implementation of such a mixed reality (MR) space for tangible interaction by extending the smARTbox [FLBS12]. The smARTbox is a responsive, touch-enabled, stereoscopic out-of-the-box system that is able to track users and objects above as well as on the surface. We describe the prototypical hardware and software setup which extends this system to an MR space, and highlight design challenges for several application examples.
Marc Erich Latoschik, Henrik Tramberend,
A scala-based actor-entity architecture for intelligent interactive simulations
, In
Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), 2012 5th Workshop on
, pp. 9-17
.
2012.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{6231175,
title = {A scala-based actor-entity architecture for intelligent interactive simulations},
author = {Latoschik, Marc Erich and Tramberend, Henrik},
booktitle = {Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), 2012 5th Workshop on},
year = {2012},
pages = {9-17},
url = {},
doi = {10.1109/SEARIS.2012.6231175}
}
Martin Fischbach, Dennis Wiebusch, Marc Erich Latoschik, Gerd Bruder, Frank Steinicke,
Blending Real and Virtual Worlds Using Self-reflection and Fiducials.
, In
Proceedings of the 11th international conference on Entertainment Computing
Marc Herrlich, Rainer Malaka, Maic Masuch (Eds.),
, pp. 465-468
.
Berlin, Heidelberg
:
Springer-Verlag
, 2012.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fischbach2012blending,
title = {Blending Real and Virtual Worlds Using Self-reflection and Fiducials.},
author = {Fischbach, Martin and Wiebusch, Dennis and Latoschik, Marc Erich and Bruder, Gerd and Steinicke, Frank},
editor = {Herrlich, Marc and Malaka, Rainer and Masuch, Maic},
booktitle = {Proceedings of the 11th international conference on Entertainment Computing},
year = {2012},
pages = {465--468},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2012-icec-blending-real-and-virtual-worlds.pdf}
}
Abstract: This paper presents an enhanced version of a portable out-of-the-box platform for semi-immersive interactive applications. The enhanced version combines stereoscopic visualization, markerless user tracking, and multi-touch with self-reflection of users and tangible object interaction. A virtual fish tank simulation demonstrates how real and virtual worlds are seamlessly blended by providing a multi-modal interaction experience that utilizes a user-centric projection, body and object tracking, as well as a consistent integration of physical and virtual properties like appearance and causality into a mixed real/virtual world.
Dennis Wiebusch, Marc Erich Latoschik,
Enhanced Decoupling of Components in Intelligent Realtime Interactive Systems using Ontologies
, In
Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), proceedings of the IEEE Virtual Reality 2012 workshop
.
2012.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{Wiebusch:2012,
title = {Enhanced Decoupling of Components in Intelligent Realtime Interactive Systems using Ontologies},
author = {Wiebusch, Dennis and Latoschik, Marc Erich},
booktitle = {Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), proceedings of the IEEE Virtual Reality 2012 workshop},
year = {2012},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2012-ieeevr-searis-enhanced-decoupling.pdf}
}
Abstract: We introduce a technique to support decoupling in component-based, modular software architectures as a means to enhance non-functional requirements, i.e., to increase reusability, portability, and adaptability. The core idea utilizes a semantic description of interfaces and component interplay in the area of Intelligent Real-time Interactive Systems (IRIS). Semantic descriptions are encoded as OWL-based models, which build a Knowledge Representation Layer (KRL) of relevant interface constructs and component features. These models are automatically transformed into programming language code of a given target language. The result of that transformation forms a semantically grounded database of relevant system aspects that programmers can use to develop their application. Examples, taken from an application that was developed with the Simulator X framework, illustrate the different aspects of the proposed method and demonstrate its practicability.
Dennis Wiebusch, Martin Fischbach, Marc Erich Latoschik, Henrik Tramberend,
Evaluating scala, actors, & ontologies for intelligent realtime interactive systems.
, In
Proceedings of the 18th ACM symposium on Virtual reality software and technology
Mark Green, Wolfgang Stuerzlinger, Marc Erich Latoschik, Bill Kapralos (Eds.),
, pp. 153-160
.
ACM
, 2012.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{conf/vrst/WiebuschFLT12,
title = {Evaluating Scala, actors, & ontologies for intelligent realtime interactive systems},
author = {Wiebusch, Dennis and Fischbach, Martin and Latoschik, Marc Erich and Tramberend, Henrik},
editor = {Green, Mark and Stuerzlinger, Wolfgang and Latoschik, Marc Erich and Kapralos, Bill},
booktitle = {Proceedings of the 18th ACM symposium on Virtual reality software and technology},
year = {2012},
pages = {153--160},
publisher = {ACM},
url = {http://dl.acm.org/authorize?N24638}
}
Abstract: This article evaluates the utility of three technical design approaches implemented during the development of a Realtime Interactive Systems (RIS) architecture focusing on the areas of Virtual and Augmented Reality (VR and AR), Robotics, and Human-Computer Interaction (HCI). The design decisions are (1) the choice of the Scala programming language, (2) the implementation of the actor computational model, and (3) the central incorporation of ontologies as a base for semantic modeling, required for several Artificial Intelligence (AI) methods. A white-box expert review is applied to a detailed use case illustrating an interactive and multimodal game scenario, which requires a number of complex functional features like speech and gesture processing and instruction mapping. The review matches the three design decisions against three comprehensive non-functional requirements from software engineering: Reusability, scalability, and extensibility. The qualitative evaluation is condensed to a semi-quantitative summary, pointing out the benefits of the chosen technical design.
Dennis Wiebusch, Martin Fischbach, Alexander Strehler, Marc Erich Latoschik, Gerd Bruder, Frank Steinicke,
Evaluation von Headtracking in interaktiven virtuellen Umgebungen auf Basis der Kinect
, In
Virtuelle und Erweiterte Realität - 9. Workshop der GI-Fachgruppe VR/AR
Christian Geiger, Jens Herder, Tom Vierjahn (Eds.),
, pp. 189-200
.
Shaker Verlag
, 2012.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{wiebusch2012evaluation,
title = {Evaluation von Headtracking in interaktiven virtuellen Umgebungen auf Basis der Kinect},
author = {Wiebusch, Dennis and Fischbach, Martin and Strehler, Alexander and Latoschik, Marc Erich and Bruder, Gerd and Steinicke, Frank},
editor = {Geiger, Christian and Herder, Jens and Vierjahn, Tom},
booktitle = {Virtuelle und Erweiterte Realität - 9. Workshop der GI-Fachgruppe VR/AR},
year = {2012},
pages = {189-200},
publisher = {Shaker Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2012-vrar-evaluation-von-headtracking.pdf}
}
Abstract: Interactive media with spatial display of virtual content are increasingly finding their way into various application fields, leading to a steadily growing demand for low-cost methods of determining a viewer's head position. In this contribution we evaluate two head-tracking methods for interactive virtual environments based on the Microsoft Kinect. We compare both methods against a professional optical tracking system and point out their advantages and disadvantages.
Marc Erich Latoschik, Dirk Reiners, Roland Blach, Pablo Figueroa (Eds.),
IEEE 5th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
IEEE digital library
, 2012.
[BibTeX]
[Download]
[BibSonomy]
@proceedings{latoschik2012software,
title = {IEEE 5th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Blach, Roland and Figueroa, Pablo},
year = {2012},
publisher = {IEEE digital library},
url = {http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6222072}
}
Martin Fischbach, Dennis Wiebusch, Marc Erich Latoschik, Gerd Bruder, Frank Steinicke,
smARTbox: A Portable Setup for Intelligent Interactive Applications
, In
Mensch & Computer Workshopband
Harald Reiterer, Oliver Deussen (Eds.),
, pp. 521-524
.
Oldenbourg Verlag
, 2012.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{conf/mc/FischbachWLBS12,
title = {smARTbox: A Portable Setup for Intelligent Interactive Applications},
author = {Fischbach, Martin and Wiebusch, Dennis and Latoschik, Marc Erich and Bruder, Gerd and Steinicke, Frank},
editor = {Reiterer, Harald and Deussen, Oliver},
booktitle = {Mensch & Computer Workshopband},
year = {2012},
pages = {521-524},
publisher = {Oldenbourg Verlag},
url = {http://dblp.uni-trier.de/db/conf/mc/mc2012w.html#FischbachWLBS12}
}
Abstract: This paper presents a semi-immersive, multimodal fish tank simulation realized using the smARTbox, an out-of-the-box platform for intelligent interactive applications. The smARTbox provides portability, stereoscopic visualization, marker-less user tracking and direct interscopic touch input. Off-the-shelf hardware is combined with a state-of-the-art simulation platform to assemble a powerful system environment, which facilitates direct (touch) and indirect (movement) interaction.
Martin Fischbach, Marc Erich Latoschik, Gerd Bruder, Frank Steinicke,
smARTbox: Out-of-the-Box Technologies for Interactive Art and Exhibition
, In
Proceedings of the 2012 Virtual Reality International Conference
Simon Richir (Ed.),
, p. 19
.
ACM
, 2012.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fischbach2012smartbox,
title = {smARTbox: Out-of-the-Box Technologies for Interactive Art and Exhibition},
author = {Fischbach, Martin and Latoschik, Marc Erich and Bruder, Gerd and Steinicke, Frank},
editor = {Richir, Simon},
booktitle = {Proceedings of the 2012 Virtual Reality International Conference},
year = {2012},
pages = {19},
publisher = {ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2012-acm-vric-fischbach-smartbox-out-of-the-box-technologies.pdf}
}
Abstract: Recent developments in the fields of interactive display technologies provide new possibilities for engaging visitors in interactive three-dimensional virtual art exhibitions. Tracking and interaction technologies such as the Microsoft Kinect and emerging multi-touch interfaces enable inexpensive and low-maintenance interactive art setups while providing portable solutions for engaging presentations and exhibitions. In this paper we describe the smARTbox, which is a responsive touch-enabled stereoscopic out-of-the-box technology for interactive art setups. Based on the described technologies, we sketch an interactive semi-immersive virtual fish tank implementation that enables direct and indirect interaction with visitors.
Marc Erich Latoschik, Steven Feiner, Dieter Schmalstieg, Carolina Cruz-Neira,
Systems Engineering Science: Obsolete or Essential?
, In
Proceedings of the IEEE Virtual Reality 2012
.
2012.
Panel
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik:2012a,
title = {Systems Engineering Science: Obsolete or Essential?},
author = {Latoschik, Marc Erich and Feiner, Steven and Schmalstieg, Dieter and Cruz-Neira, Carolina},
booktitle = {Proceedings of the IEEE Virtual Reality 2012},
year = {2012},
note = {Panel},
url = {}
}
Abstract: The engineering of systems plays a significant role in the exciting field of virtual and augmented reality (VR and AR). Expectations are constantly rising. State-of-the-art VR/AR-applications often depend on multiple simulation aspects, from multimodal input processing and output generation to intelligent behavior of entities or agents. All too often, such diverse aspects define their own set of highly heterogeneous requirements which call for alternative and novel engineering approaches. Fortunately, there are constant advancements in software technology, from system architectures, design patterns, to programming paradigms or even programming languages which can either be applied to, or developed, refined, or put into practice in the VR/AR-domain for mutual benefits. However, despite the importance of such technological advancements, there seems to be a decreased interest in the engineering science over the last years. This panel counters this trend by openly addressing current and future challenges of the science of software engineering and system development in the field of VR and AR, asking:
What is the significance of software engineering and programming developments for our field?
What are important software engineering and programming developments to take into account?
How will we integrate the state-of-the-art of the engineering sciences continuously?
How will we value, i.e., publish, review, and rate technically oriented material?
2011
Marc Erich Latoschik, Dirk Reiners, Roland Blach, Pablo Figueroa, Raimund Dachselt (Eds.),
IEEE 4th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
Shaker Verlag
, 2011.
[BibTeX]
[Download]
[BibSonomy]
@proceedings{noauthororeditor2011software,
title = {IEEE 4th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Blach, Roland and Figueroa, Pablo and Dachselt, Raimund},
year = {2011},
publisher = {Shaker Verlag},
url = {}
}
Marc Erich Latoschik, Henrik Tramberend,
Simulator X: A Scalable and Concurrent Software Platform for Intelligent Realtime Interactive Systems
, In
2011 IEEE Virtual Reality Conference
, pp. 171-174
.
2011.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik:2011,
title = {Simulator X: A Scalable and Concurrent Software Platform for Intelligent Realtime Interactive Systems},
author = {Latoschik, Marc Erich and Tramberend, Henrik},
booktitle = {2011 IEEE Virtual Reality Conference},
year = {2011},
pages = {171--174},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2011-simulator-x-latoschik.pdf}
}
Abstract: This article presents a platform for software technology research in the area of intelligent Realtime Interactive Systems. Simulator X is targeted at Virtual Reality, Augmented Reality, Mixed Reality, and computer games. It provides a foundation and testbed for a variety of different application models. The current research architecture is based on the actor model to support fine grained concurrency and parallelism. Its design follows the minimize coupling and maximize cohesion software engineering principle. A distributed world state and execution scheme is combined with an object-centered world view based on an entity model. Entities conceptually aggregate properties internally represented by state variables. An asynchronous event mechanism allows intra- and interprocess communication between the simulation actors. An extensible world interface uses an ontology-based semantic annotation layer to provide a coherent world view of the resulting distributed world state and execution scheme to application developers. The world interface greatly simplifies configurability and the semantic layer provides a solid foundation for the integration of different Artificial Intelligence components. The current architecture is implemented in Scala using the Java virtual machine. This choice additionally fosters low-level scalability, portability, and reusability.
Martin Fischbach, Dennis Wiebusch, Anke Giebler-Schubert, Marc Erich Latoschik, Stephan Rehfeld, Henrik Tramberend,
SiXton's Curse - Simulator X demonstration
, In
Virtual Reality Conference (VR), 2011 IEEE
Michitaka Hirose, Benjamin Lok, Aditi Majumder, Dieter Schmalstieg (Eds.),
, pp. 255-256
.
2011.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fischbach:2011,
title = {SiXton's Curse - Simulator X demonstration},
author = {Fischbach, Martin and Wiebusch, Dennis and Giebler-Schubert, Anke and Latoschik, Marc Erich and Rehfeld, Stephan and Tramberend, Henrik},
editor = {Hirose, Michitaka and Lok, Benjamin and Majumder, Aditi and Schmalstieg, Dieter},
booktitle = {Virtual Reality Conference (VR), 2011 IEEE},
year = {2011},
pages = {255-256},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2011-ieeevr-sixtons-curse-simulatorx-demonstration.pdf}
}
Abstract: We present SiXton’s Curse – a computer game – to illustrate the benefits of a novel simulation platform. Simulator X targets virtual, augmented, and mixed reality applications as well as computer games. The game simulates a medieval village called SiXton that can be explored and experienced using gestures and speech for input. SiXton’s Curse utilizes multiple independent components for physical simulation, sound and graphics rendering, artificial intelligence, as well as for multi-modal interaction (MMI). The components are already an integral part of Simulator X’s current version. Building on Hewitt’s actor model, the Simulator X platform enables the developer to easily exploit the capabilities of modern hardware architectures. A state variable concept is implemented on top of the actor model to grant uniform and easy access to global states and values by using the internal mechanisms of the actor model. Communication via an asynchronous messaging interface reduces component coupling. The scalability of the actor model provides a uniform concurrency paradigm on different levels of granularity as well as exchangeability of architectural elements and components.
2010
Stephan Rehfeld, Marc Erich Latoschik,
A Comparison of Parallelization Methods for Data Flow Networks
, In
Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), proceedings of the IEEE Virtual Reality 2010 workshop
.
2010.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{Rehfeld:2010,
title = {A Comparison of Parallelization Methods for Data Flow Networks},
author = {Rehfeld, Stephan and Latoschik, Marc Erich},
booktitle = {Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), proceedings of the IEEE Virtual Reality 2010 workshop},
year = {2010},
url = {}
}
Abstract: This paper compares two parallelization methods for data flow networks. A strict decoupling between the networks' structural components and their evaluation is achieved using a pattern-oriented architecture with generalized traverser objects. This architecture provides a clean basis for the implementation of parallel execution code using a) an explicit parallelization based on pthreads and b) an implicit parallelization using OpenMP. Both methods are then evaluated and compared to each other for different traversal heuristics.
Dennis Wiebusch, Marc Erich Latoschik, Henrik Tramberend,
Ein Konfigurierbares World-Interface zur Kopplung von KI-Methoden an Interaktive Echtzeitsysteme
, In
Virtuelle und Erweiterte Realität, 7. Workshop of the GI special interest group VR/AR
, pp. 47-58
.
Shaker Verlag
, 2010.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{wiebusch:2010,
title = {Ein Konfigurierbares World-Interface zur Kopplung von KI-Methoden an Interaktive Echtzeitsysteme},
author = {Wiebusch, Dennis and Latoschik, Marc Erich and Tramberend, Henrik},
booktitle = {Virtuelle und Erweiterte Realität, 7. Workshop of the GI special interest group VR/AR},
year = {2010},
pages = {47--58},
publisher = {Shaker Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2010-vrar-worldinterface.pdf}
}
Abstract: This article describes a configurable interface, a so-called World-Interface, for coupling AI methods into a middleware platform for interactive real-time systems in VR, AR, and MR. Relevant events, their semantic representation, and the corresponding reactions to these events can be defined at runtime and configured for the requirements at hand. This makes the system adaptable to different application contexts and concrete middleware functionalities. The approach is demonstrated by coupling a rule-based production system to develop the application logic for a small computer game.
Marc Erich Latoschik, Henrik Tramberend,
Engineering Realtime Interactive Systems: Coupling & Cohesion of Architecture Mechanisms
, In
Proceedings of the 16th Eurographics conference on Virtual Environments & Second Joint Virtual Reality
, pp. 25-28
.
2010.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik:2010,
title = {Engineering Realtime Interactive Systems: Coupling & Cohesion of Architecture Mechanisms},
author = {Latoschik, Marc Erich and Tramberend, Henrik},
booktitle = {Proceedings of the 16th Eurographics conference on Virtual Environments & Second Joint Virtual Reality},
year = {2010},
pages = {25--28},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2010-latoschik-ris-coupling-cohesion.pdf}
}
Abstract: This paper reviews coupling and cohesion as software quality criteria for the development of Realtime Interactive Systems (RIS). The applicability of these criteria to evaluate RIS architecture mechanisms is examined while the utilization of existing software metrics is discussed. Three commonly found mechanisms, scene graphs, event systems and entity models, are evaluated with respect to a minimization of coupling and a maximization of cohesion. The paper motivates an analytical approach to the evaluation of software techniques as well as a strengthening of software technology aspects in the field of interactive simulations in general given current challenges of diversification, parallelization, and interconnection.
Marc Erich Latoschik, Henrik Tramberend,
Guru Meditation: Kopplung & Kohäsion - Entwicklung interaktiver Graphiksysteme
, In
Augmented & Virtual Reality in der Produktentstehung, 9. Paderborner Workshop Augmented & Virtual Reality in der Produktentstehung
Jürgen Gausemeier, Michael Grafe (Eds.),
.
Heinz Nixdorf MuseumsForum
, 2010.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{latoschik:2010a,
title = {Guru Meditation: Kopplung & Kohäsion - Entwicklung interaktiver Graphiksysteme},
author = {Latoschik, Marc Erich and Tramberend, Henrik},
editor = {Gausemeier, Jürgen and Grafe, Michael},
booktitle = {Augmented & Virtual Reality in der Produktentstehung, 9. Paderborner Workshop Augmented & Virtual Reality in der Produktentstehung},
year = {2010},
publisher = {Heinz Nixdorf MuseumsForum},
url = {}
}
Marc Erich Latoschik, Dirk Reiners, Roland Blach, Pablo Figueroa, Raimund Dachselt (Eds.),
IEEE 3rd Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
Shaker Verlag
, 2010.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@proceedings{searis:2010,
title = {IEEE 3rd Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Blach, Roland and Figueroa, Pablo and Dachselt, Raimund},
year = {2010},
publisher = {Shaker Verlag},
url = {}
}
Abstract: Welcome to SEARIS 2010! We are delighted to rerun this workshop at IEEE VR in Waltham, Massachusetts for the third time, really making it an annual event. We hope this continues to be a place for discussing the state of the art on software engineering in our field. Several approaches have been developed and utilized in the field of Real-time Interactive Systems (RIS) in the past two decades. Virtual, Augmented, Virtualized, in general Mixed Realities, as well as real-time simulation and computer games have led to manifold inspiring solutions for RIS developments in research and production. However, it is an ongoing challenge to identify and separate both novel results and well-known solutions in any new system. The goal of this workshop is to analyze and structure the current state of the art in RIS software engineering and architectures. We want to identify common as well as novel paradigms, concepts, methods, and techniques that support technical developments required in this field. A unified presentation of systems will allow us to support research and development in a more efficient way, and will provide a valuable source of information for future developments. The workshop series is an integrated attempt to address the complex issue of RIS development and to summarize the work our community is doing. SEARIS provides a forum for researchers and practitioners working on the design, development, and support of real-time interactive systems, which span from VR, AR, and MR environments to novel Human-Computer-Interaction systems and entertainment applications. After successful SEARIS workshops in 2008 and 2009, the follow-up proceeds to establish a sustainable community shaping a common understanding, deriving common paradigms, developing useful and necessary methods and techniques, and fostering new ideas.
This year's proceedings contain 14 accepted contributions, which add to the ideas and discussions of the community from the past SEARIS workshops in 2008 and 2009. All contributions are also available online (http://www.searis.net). Various hot topics have been identified from the current scientific discussion and have been presented and discussed in different sessions. The contributions could be grouped according to several aspects; in fact, it is one of the workshop's goals to identify such key aspects, and many authors shed light on several key issues.
We grouped the papers into four main sections: Concepts, Methods and Techniques; Frameworks & Specific Architectures; Behavior & Dataflow; and Distribution.
The target audience for the SEARIS workshop series and its publications are researchers and developers from VR/AR as well as from technically close fields like ambient or pervasive computing and, of course, the computer games community. We would like to thank all the people who made this workshop a reality: first, the workshop chairs at IEEE VR for their support and willingness to accept our proposal; next, all the people who submitted papers to this track, accepted or not. They are the heart and soul of this workshop and the starting point of the discussion we would like to foster. Finally, we would also like to thank the attendees of the workshop for their active interest in this research area.
Marc Erich Latoschik (Universität Bayreuth, Germany) Dirk Reiners (University of Louisiana, Lafayette, USA) Roland Blach (CC Virtual Environments Fraunhofer IAO Stuttgart, Germany) Pablo Figueroa (Universidad de los Andes Bogota, Colombia) Raimund Dachselt (Otto von Guericke Universität Magdeburg, Germany)
Marc Erich Latoschik,
Intelligente Techniken für innovative Interaktionen
, In
Virtual Reality - Mensch und Maschine im Interaktiven Dialog
Michael Schenk (Ed.),
.
2010.
[BibTeX]
[Download]
[BibSonomy]
@incollection{latoschik:2010b,
title = {Intelligente Techniken für innovative Interaktionen},
author = {Latoschik, Marc Erich},
editor = {Schenk, Michael},
booktitle = {Virtual Reality - Mensch und Maschine im Interaktiven Dialog},
year = {2010},
url = {}
}
M. Slater, A. Steed, M.E. Latoschik, D. Reiners (Eds.),
PRESENCE journal special issue: Reflections on the Design and Implementation of Virtual Environment Systems
, Vol.
19
(
2)
.
MIT Press
, 2010.
[BibTeX]
[Download]
[BibSonomy]
@book{slater-etal:presence-searis:2010,
title = {PRESENCE journal special issue: Reflections on the Design and Implementation of Virtual Environment Systems},
editor = {Slater, M. and Steed, A. and Latoschik, M.E. and Reiners, D.},
year = {2010},
volume = {19},
number = {2},
publisher = {MIT Press},
url = {}
}
Jean-Luc Lugrin, Marc Cavazza,
Towards AR Game Engines
, In
3rd workshop on software engineering and architecture of realtime interactive systems SEARIS
Marc Erich Latoschik, Dirk Reiners, Roland Blach, Pablo Figueroa, Raimund Dachselt (Eds.),
.
Shaker Verlag
, 2010.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lugrin2010argameengines,
title = {Towards AR Game Engines},
author = {Lugrin, Jean-Luc and Cavazza, Marc},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Blach, Roland and Figueroa, Pablo and Dachselt, Raimund},
booktitle = {3rd workshop on software engineering and architecture of realtime interactive systems SEARIS},
year = {2010},
publisher = {Shaker Verlag},
url = {}
}
Lutz Lukas, Felix Schwägerl, Marc Erich Latoschik,
Unifikationsbasierte Sprach-Gesten Fusion für Multimodale VR/AR-Schnittstellen
, In
Virtuelle und Erweiterte Realität, 7. Workshop of the GI special interest group VR/AR
, pp. 145-156
.
Shaker Verlag
, 2010.
Best paper award 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{lukas:2010,
title = {Unifikationsbasierte Sprach-Gesten Fusion für Multimodale VR/AR-Schnittstellen},
author = {Lukas, Lutz and Schwägerl, Felix and Latoschik, Marc Erich},
booktitle = {Virtuelle und Erweiterte Realität, 7. Workshop of the GI special interest group VR/AR},
year = {2010},
pages = {145--156},
publisher = {Shaker Verlag},
note = {Best paper award 🏆},
url = {}
}
Abstract: This article describes a system for realizing multimodal interactions in Virtual, Augmented, and Mixed Reality systems. A user's movements, captured by a marker-based infrared tracking system, are subjected to a time-series and feature analysis. The subsequent gesture recognition classifies movement patterns with a connectionist learning method using neural networks. Beyond pure classification, relevant movement data are correlated to enable a later evaluation of the spatial gesture expression. The fusion of gestural and spoken input uses a unification approach for multimodal grammars. For the context of interactive systems, the underlying chart parser was extended with incremental processing. So-called featlets enable a unified treatment of speech and gesture units within the unification. The fusion process applies both a semantic and a temporal assignment. The atomic unification step remains universal; alternative relations enable a variable agreement. The input processing components are loosely coupled to a VR/AR middleware using the actor model.
2009
Marc Erich Latoschik, Dirk Reiners, Roland Blach, Pablo Figueroa, Raimund Dachselt (Eds.),
IEEE 2nd Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
Shaker Verlag
, 2009.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@proceedings{searis:2009,
title = {IEEE 2nd Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Blach, Roland and Figueroa, Pablo and Dachselt, Raimund},
year = {2009},
publisher = {Shaker Verlag},
url = {}
}
Abstract: Several approaches have been developed and utilized in the field of Realtime Interactive Systems (RIS) in the past two decades. Virtual, Augmented, Virtualized, in general Mixed Realities, as well as real-time simulations and computer games have led to manifold inspiring solutions for RIS developments in research and production. However, it is an ongoing challenge to identify and separate both novel results and well-known solutions in any new system. The goal of this workshop is to analyze and structure the current state of the art in RIS software engineering and architectures. We want to identify common as well as novel paradigms, concepts, methods, and techniques that support technical developments required in this field. A unified presentation of systems will allow us to support research and development in a more efficient way, and will provide a valuable source of information for future developments. This workshop is our first integrated attempt to address the complex issue of RIS development and to summarize the work our community is doing.
SEARIS provides a forum for researchers and practitioners working on the design, development, and support of Realtime Interactive Systems, which span from VR, AR, and MR environments to novel Human-Computer-Interaction systems and entertainment applications. After a successful initial SEARIS workshop in 2008, this first follow-up proceeds to establish a sustainable community shaping a common understanding, deriving common paradigms, developing useful and necessary methods and techniques, and fostering new ideas. This year's workshop at IEEE VR 2009 has built on our previous experiences at SEARIS 2008 and fostered an interactive, discussion-like exchange format as opposed to rather traditional paper presentations. We have been delighted to be again part of the program of IEEE VR 2009 in Lafayette, Louisiana. The proceedings contain 11 accepted contributions, which add to the ideas and discussions the community collected during the first SEARIS workshop in 2008 and are also available online at http://www.searis.net/. Various hot topics have been identified from the current scientific discussion and have been presented and discussed in different panels. The contributions could be grouped according to several aspects; in fact, it is one of the workshop's goals to identify such key aspects, and many authors shed light on several key issues.
We grouped the papers into four main sections: Specific System Architectures; Modeling and Abstraction; Subsystems of RIS; and Methodology and Patterns. The target audience for the SEARIS workshop series and its publications are researchers and developers from VR/AR as well as from technically close fields like ambient/pervasive computing and, of course, the computer games community. We would like to thank all the people who made this workshop a reality: first, the workshop chairs at IEEE VR for their support and willingness to accept our proposal; next, all the people who submitted papers to this track, accepted or not. They are the heart and soul of this workshop and the starting point of the discussion we would like to foster. Finally, we thank the attendees of the workshop for their active interest in this proposal.
Marc Erich Latoschik (Universität Bayreuth, Germany) Dirk Reiners (University of Louisiana, Lafayette, USA) Roland Blach (CC Virtual Environments Fraunhofer IAO Stuttgart, Germany) Pablo Figueroa (Universidad de los Andes Bogota, Colombia) Raimund Dachselt (Otto von Guericke Universität Magdeburg, Germany)
Stephan Rehfeld, Marc Erich Latoschik,
Parallelisierung von Traverser-Operationen in Datenflussnetzen
, In
Virtuelle und Erweiterte Realität, 6. Workshop of the GI VR & AR special interest group
, pp. 73-89
.
Shaker Verlag
, 2009.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{rehfeld-latoschik:parallelisierung:2009,
title = {Parallelisierung von Traverser-Operationen in Datenflussnetzen},
author = {Rehfeld, Stephan and Latoschik, Marc Erich},
booktitle = {Virtuelle und Erweiterte Realität, 6. Workshop of the GI VR & AR special interest group},
year = {2009},
pages = {73-89},
publisher = {Shaker Verlag},
url = {}
}
Christian Fröhlich, Peter Biermann, Marc Erich Latoschik, Ipke Wachsmuth,
Processing iconic gestures in a multimodal virtual construction environment
, In
Gesture-Based Human-Computer Interaction and Simulation
(
5085)
, pp. 187-192
.
Springer
, 2009.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{froelichetal:iconic:2009,
title = {Processing iconic gestures in a multimodal virtual construction environment},
author = {Fröhlich, Christian and Biermann, Peter and Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {Gesture-Based Human-Computer Interaction and Simulation},
year = {2009},
number = {5085},
pages = {187-192},
publisher = {Springer},
url = {}
}
A. Gerndt, Marc Erich Latoschik (Eds.),
Virtuelle und Erweiterte Realität, 6. Workshop of the GI special interest group VR/AR
.
Shaker Verlag
, 2009.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@proceedings{gerndt:ARVR:2009,
title = {Virtuelle und Erweiterte Realität, 6. Workshop of the GI special interest group VR/AR},
editor = {Gerndt, A. and Latoschik, Marc Erich},
year = {2009},
publisher = {Shaker Verlag},
url = {}
}
Abstract: The special interest group Virtual and Augmented Reality of the Gesellschaft für Informatik was founded in 2003 as an information platform and community of interest. Since 2004, the group has organized an annual workshop, hosted on July 15, 2007 at the Bauhaus-Universität Weimar. This volume contains the 22 selected contributions. The range of topics spans from human-machine interaction over simulation, projection, and rendering technologies to VR/AR application scenarios. We are particularly pleased about the active participation of many young researchers and students -- an indication of the continuing attractiveness and topicality of the future technologies Virtual Reality and Augmented Reality. Special thanks go to the program committee and the many helpers in the local organization, without whose support the organization and realization of this event would not have been possible. We look forward to an interesting event with lively discussions.
Christian Fröhlich, Marc Erich Latoschik, Ipke Wachsmuth,
Virtuelle Werkstatt -- Multimodale Interaktion für intelligente virtuelle Konstruktion
, In
8. Paderborner Workshop Augmented & Virtual Reality in der Produktentstehung
, pp. 241-255
.
Paderborn: HNI
, 2009.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{froelichetal:vwerk:2009,
title = {Virtuelle Werkstatt -- Multimodale Interaktion für intelligente virtuelle Konstruktion},
author = {Fröhlich, Christian and Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {8. Paderborner Workshop Augmented & Virtual Reality in der Produktentstehung},
year = {2009},
pages = {241-255},
publisher = {Paderborn: HNI},
url = {}
}
2008
Thies Pfeiffer, Marc Erich Latoschik, Ipke Wachsmuth,
Conversational Pointing Gestures for Virtual Reality Interaction: Implications from an Empirical Study
, In
Proceedings of the IEEE VR 2008
, pp. 281-282
.
ACM
, 2008.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{Pfeiffer:2008aa,
title = {Conversational Pointing Gestures for Virtual Reality Interaction: Implications from an Empirical Study},
author = {Pfeiffer, Thies and Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {Proceedings of the IEEE VR 2008},
year = {2008},
pages = {281-282},
publisher = {ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/conversational-pointing-IEEEVR08.pdf}
}
Abstract: Interaction in conversational interfaces strongly relies on the system's capability to interpret the user's references to objects via deictic expressions. Deictic gestures, especially pointing gestures, provide a powerful way of referring to objects and places, e.g., when communicating with an Embodied Conversational Agent in a Virtual Reality Environment. We highlight results drawn from a study on pointing and draw conclusions for the implementation of pointing-based conversational interactions in partly immersive Virtual Reality.
Thies Pfeiffer, Marc Erich Latoschik, Ipke Wachsmuth,
Evaluation of Binocular Eye Trackers and Algorithms for 3D Gaze Interaction in Virtual Reality Environments
, In
Journal of Virtual Reality and Broadcasting
J. Herder (Ed.),
, Vol.
5
(
16)
.
2008.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{Pfeiffer:2009ab,
title = {Evaluation of Binocular Eye Trackers and Algorithms for 3D Gaze Interaction in Virtual Reality Environments},
author = {Pfeiffer, Thies and Latoschik, Marc Erich and Wachsmuth, Ipke},
editor = {Herder, J.},
journal = {Journal of Virtual Reality and Broadcasting},
year = {2008},
volume = {5},
number = {16},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/evaluation-of-eyetrackers-JVRB08.pdf},
doi = {urn:nbn:de:0009-6-16605}
}
Abstract: Tracking the user's visual attention is a fundamental aspect in novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user's visual attention as a vital source for deictic references or turn-taking signals. Current approaches to determine visual attention rely primarily on monocular eye trackers. Hence they are restricted to the interpretation of two-dimensional fixations relative to a defined area of projection. The study presented in this article compares precision, accuracy and application performance of two binocular eye tracking devices. Two algorithms are compared which derive depth information as required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm is used during the disambiguation of partly occluded objects.
Marc Erich Latoschik, Dirk Reiners, Roland Blach, Pablo Figueroa, Raimund Dachselt (Eds.),
IEEE 1st Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)
.
Shaker Verlag
, 2008.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@proceedings{searis:2008,
title = {IEEE 1st Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS)},
editor = {Latoschik, Marc Erich and Reiners, Dirk and Blach, Roland and Figueroa, Pablo and Dachselt, Raimund},
year = {2008},
publisher = {Shaker Verlag},
url = {}
}
Abstract: Welcome to the first Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS). We are delighted to be part of the program of IEEE VR 2008, in Reno, Nevada. These proceedings contain the 15 accepted contributions, which we believe are thought-provoking and representative of the current state of the art in designing Virtual and Augmented Reality systems. We are expecting SEARIS to become the premier venue to publish and discuss these systems-related issues, as there is currently no other venue for these topics.
In previous years, several researchers have organized related workshops and loosely arranged themselves as an unofficial interest group during past events. Their goal was to create a forum where researchers from all directions in the broad field of Virtual and Augmented Reality can contribute and debate their respective technical approaches. This workshop provides a face-to-face opportunity to further support this emerging project. Several approaches have been developed and utilized in the field of Realtime Interactive Systems (RIS). Virtual, Augmented, Virtualized, in general Mixed Realities, as well as real-time simulation and computer games led to manifold inspiring solutions for RIS developments in research and production. However, it is an ongoing challenge to identify and separate both novel results and well known solutions in any new system. The goal of this workshop is to analyze and structure the current state-of-the-art in RIS software engineering and architectures. We want to identify common as well as novel paradigms, concepts, methods, and techniques that support technical developments required in this field. A unified presentation of systems will allow us to support research and development in a more efficient way, and will provide a valuable source of information for future developments. This workshop is our first integrated attempt to address the complex issue of RIS development and to summarize the work our community is doing.
Arranging the contributions into sections of similar key aspects was a difficult process. Many contributions could be grouped according to several aspects. In fact, it is one of the workshop's goals to identify such key aspects, and many authors are shedding light onto several key issues. We apologize for any ambiguity here. In the end, we needed a good structure and hence we grouped the papers into 4 main sections:
* Systems. Six development systems are presented: InTml, Lightning, FlowVR, OpenMASK, VISTA, and MORGAN.
* Abstraction Issues. Two papers address the issues of reusable VE platforms and alternatives to scene graphs.
* Special Issues. In this section we collect papers related to the implementation of RIS on particular platforms, such as mobile systems, multi-rate systems, and Mixed Reality.
* Semantic and Dynamic Modeling. The four papers in this section show models for explicitly describing semantic (i.e. "a chair is on the floor") and dynamic information.
Christian Fröhlich, Marc Erich Latoschik,
Incorporating the Actor Model into SCIVE on an Abstract Semantic Level
, In
Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), proceedings of the IEEE Virtual Reality 2008 workshop
, pp. 61-64
.
2008.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{Frohlich:2008aa,
title = {Incorporating the Actor Model into SCIVE on an Abstract Semantic Level},
author = {Fröhlich, Christian and Latoschik, Marc Erich},
booktitle = {Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), proceedings of the IEEE Virtual Reality 2008 workshop},
year = {2008},
pages = {61-64},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/incorporating-actor-model-SEARIS08.pdf}
}
Abstract: This paper illustrates the temporal synchronization and process flow facilities of a real-time simulation system which is based on the actor model. The requirements of such systems are compared against the actor model's basic features and structures. The paper describes how a modular architecture benefits from the actor model on the module level and points out how the actor model enhances parallelism and concurrency even down to the entity level. The actor idea is incorporated into a novel simulation core for intelligent virtual environments (SCIVE). SCIVE supports well-known and established Virtual Reality paradigms like the scene graph metaphor, field routing concepts etc. In addition, SCIVE provides an explicit semantic representation of its internals as well as of the specific virtual environments' contents. SCIVE uses a knowledge representation layer (KRL) to tie together the participating modules of a simulation system and reflects this information between the modules and processes. As a consequence, the actor model based temporal relations are lifted to the KRL which in turn is implemented by a real-time tailored semantic net base formalism. The modules' process flow is henceforth described on the KRL. This high-level description is extended down to the level of detailed function calls between the modules. Functions, their parameters, and their return values are reflected on the KRL. This provides an integrative semantic description and interconnection layer uniformly accessible a) for the incorporated technical modules and processes as well as b) for the human designers and developers.
Bernhard Alexander Brüning, Marc Erich Latoschik, Ipke Wachsmuth,
Interaktives Motion-Capturing zur Echtzeitanimation virtueller Agenten
, In
Virtuelle und Erweiterte Realität, 5. Workshop of the GI VR & AR special interest group
, pp. 25-36
.
Shaker Verlag
, 2008.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{Bruning:2008aa,
title = {Interaktives Motion-Capturing zur Echtzeitanimation virtueller Agenten},
author = {Brüning, Bernhard Alexander and Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {Virtuelle und Erweiterte Realität, 5. Workshop of the GI VR & AR special interest group},
year = {2008},
pages = {25-36},
publisher = {Shaker Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/interaktives-mocap-GIVRAR08.pdf}
}
Abstract: Motion capturing (MoCap) enables realistic movements of virtual agents. This paper presents a system for optical motion capture and GPU-accelerated motion synthesis. The approach takes the cluster environments of today's VR systems into account. Using an optical tracking system, the movements of a user in an immersive virtual environment are captured on the basis of the position and orientation data of several markers attached to significant body points. The transfer to a virtual agent happens interactively and supports the adaptation of different skeleton proportions. Significant parts of the required kinematic computations, based on a Denavit-Hartenberg representation, as well as the mesh deformations necessary for animating the agent, are realized in real time via a GPU-based (GPU: Graphics Processing Unit) implementation of the algorithms.
Anton Feldmann, Marc Erich Latoschik,
Methoden der Künstlichen Intelligenz für Computerspiele auf Basis semantischer Modelle interaktiver VR/AR-Systeme
, In
Virtuelle und Erweiterte Realität, 5. Workshop of the GI VR & AR special interest group
, pp. 113-124
.
Shaker Verlag
, 2008.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{Feldmann:2008aa,
title = {Methoden der Künstlichen Intelligenz für Computerspiele auf Basis semantischer Modelle interaktiver VR/AR-Systeme},
author = {Feldmann, Anton and Latoschik, Marc Erich},
booktitle = {Virtuelle und Erweiterte Realität, 5. Workshop of the GI VR & AR special interest group},
year = {2008},
pages = {113-124},
publisher = {Shaker Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/methoden-der-KI-in-spielen-GIVRAR08.pdf}
}
Abstract: Methods of Artificial Intelligence (AI) play a central role in the development of today's computer games. An explicit loose coupling of isolated AI modules with the required modules for graphical, physical, and auditory simulation etc. is becoming increasingly complex and constitutes a critical interface within interactive systems. This paper documents an alternative tightly coupled modeling of an integrating architecture based on a general semantic abstraction layer. In the designed architecture, a semantic net brings the diverse simulation modules together and forms the basis for a unified design paradigm, ''Semantic Reflection''. Here, this paradigm is applied to the realization of various AI methods required in the context of computer games. The advantages of this approach are illustrated using the examples of path planning, decision making, behavior control, learning, and the integration of a scripting interface.
Marc Erich Latoschik, Roland Blach,
Semantic Modelling for Virtual Worlds -- A Novel Paradigm for Realtime Interactive Systems?
, In
Proceedings of the ACM VRST 2008
, pp. 17-20
.
2008.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik-blach:semanic-modelling:2008,
title = {Semantic Modelling for Virtual Worlds -- A Novel Paradigm for Realtime Interactive Systems?},
author = {Latoschik, Marc Erich and Blach, Roland},
booktitle = {Proceedings of the ACM VRST 2008},
year = {2008},
pages = {17-20},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/semantic-modeling-for-VW-VRST08.pdf}
}
Abstract: Engineering systems plays a central role in the development of successful Virtual Reality (VR) and Augmented Reality (AR) applications. Increasing computational resources are utilized to build increasingly complex artificial environments and extensive Human-Computer Interaction (HCI) systems. These types of Realtime Interactive Systems (RIS) establish a closed HCI loop. They are characterized as systems continuously analyzing users' input while concurrently synthesizing appropriate output for several of the human senses in real-time.
Marc Erich Latoschik,
Semantic Reflection - design and development of intelligent interactive 3D graphics systems
, In
Virtual Realities
G. Brunnett, S. Coquillart, G. Welch (Eds.),
.
Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany
, 2008.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@incollection{Latoschik:2008aa,
title = {Semantic Reflection - design and development of intelligent interactive 3D graphics systems},
author = {Latoschik, Marc Erich},
editor = {Brunnett, G. and Coquillart, S. and Welch, G.},
booktitle = {Virtual Realities},
year = {2008},
publisher = {Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany},
url = {http://drops.dagstuhl.de/opus/volltexte/2008/1634}
}
Abstract: From 1st to 6th June 2008, the Dagstuhl Seminar 08231 ``Virtual Realities'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. Virtual Reality (VR) is a multidisciplinary area of research aimed at interactive human-computer mediated simulations of artificial environments. Typical applications include simulation, training, scientific visualization, and entertainment. An important aspect of VR-based systems is the stimulation of the human senses -- typically sight, sound, and touch -- such that a user feels a sense of presence (or immersion) in the virtual environment. Different applications require different levels of presence, with corresponding levels of realism, sensory immersion, and spatiotemporal interactive fidelity. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. Links to extended abstracts or full papers are provided, if available.
Article abstract: The complexity of interactive 3D graphics systems continuously grows. Animated virtual worlds require several processes to generate a believable and consistent user impression. Advanced audio, physics, or AI-behaviors - to name a few - are nowadays omnipresent in research as well as in entertainment applications. This talk introduces a novel design and development paradigm for such interactive and complex systems. ``Semantic Reflection'' extends the well known reflection principle of current programming languages using an explicit semantic layer. It facilitates a uniform and integrative design of architectures, layouts, and interfaces even for complex systems. In addition, the paradigm provides an implicit AI representation useful - if not necessary - in areas like, e.g., multimodal interaction, physical simulation, or intelligent virtual agents.
Nils Peuser, Marc Erich Latoschik,
Using Semantic Traversers to create persistent knowledge-based Virtual Reality applications
, In
Virtuelle und Erweiterte Realität, 5. Workshop of the GI VR & AR special interest group
, pp. 125-136
.
Shaker Verlag
, 2008.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{Peuser:2008aa,
title = {Using Semantic Traversers to create persistent knowledge-based Virtual Reality applications},
author = {Peuser, Nils and Latoschik, Marc Erich},
booktitle = {Virtuelle und Erweiterte Realität, 5. Workshop of the GI VR & AR special interest group},
year = {2008},
pages = {125-136},
publisher = {Shaker Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/using-semantic-traversers-GIVRAR08.pdf}
}
Abstract: This article introduces the concept of Semantic Traversers (STs) and exemplarily illustrates its utilization inside a Virtual Reality (VR) platform for multimodal construction. The development of reusable and parameterizable routines which are based on the concept of Semantic Reflection is described. These routines work on the Knowledge Representation Layer (KRL) of a simulation framework to realize complex application logic and data flow concepts like field routing. The KRL is implemented by a functionally extended semantic network. The ST development in C++ preserves real-time capabilities while the abstract description of data structures and application logic realizes a persistent and platform-independent representation of programs. The advantages of traverser representation through Semantic Reflection are elaborated. Finally, an editing tool is presented that enables developers to visualize and modify the semantic networks which are used to describe the application.
2007
Marc Erich Latoschik, Elmar Bomberg,
Augmenting a Laser Pointer with a Diffraction Grating for Monoscopic 6DOF Detection
, In
Journal of Virtual Reality and Broadcasting
, Vol.
4
(
14)
.
2007.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{latoschik:2007:laserpointer,
title = {Augmenting a Laser Pointer with a Diffraction Grating for Monoscopic 6DOF Detection},
author = {Latoschik, Marc Erich and Bomberg, Elmar},
journal = {Journal of Virtual Reality and Broadcasting},
year = {2007},
volume = {4},
number = {14},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/augmenting-laser-pointer-IJVRB07.pdf}
}
Abstract: This article illustrates the detection of 6 degrees of freedom (DOF) for Virtual Environment interactions using a modified simple laser pointer device and a camera. The laser pointer is combined with a diffraction grating to project a unique laser grid onto the projection planes used in projection-based immersive VR setups. The distortion of the projected grid is used to calculate the translational and rotational degrees of freedom required for human-computer interaction purposes.
Matthias Donner, Thies Pfeiffer, Marc Erich Latoschik, Ipke Wachsmuth,
Blickfixationstiefe in stereoskopischen VR-Umgebungen: Eine vergleichende Studie
, In
Virtuelle und Erweiterte Realität, 4. Workshop of the GI special interest group VR/AR
, pp. 113-124
.
2007.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{donner:2007:tiefenbestimmung,
title = {Blickfixationstiefe in stereoskopischen VR-Umgebungen: Eine vergleichende Studie},
author = {Donner, Matthias and Pfeiffer, Thies and Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {Virtuelle und Erweiterte Realität, 4. Workshop of the GI special interest group VR/AR},
year = {2007},
pages = {113-124},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Eyetracking-VRAR07.pdf}
}
Abstract: Capturing the user's attention is of great interest for human-machine interaction. This holds in particular for applications in Virtual Reality (VR), not least when virtual agents are employed as the user interface. Current approaches to determining visual attention mostly use monocular eye trackers and therefore only two-dimensional meaningful gaze fixations relative to a projection plane. For typical stereoscopy-based VR applications, however, the fixation depth must additionally be taken into account in order to make the depth parameter usable for interaction, for instance for higher accuracy in object selection (picking). The experiment presented in this contribution shows that even a simpler binocular device makes it easier to distinguish between partly occluding objects. Despite this positive result, an unrestricted improvement of selection performance cannot yet be shown. The contribution concludes with a discussion of further steps aimed at improving the presented technique.
Thies Pfeiffer, Marc Erich Latoschik,
Interactive Social Displays
, In
Proceedings of the IEEE Symposium on 3D User Interfaces 2007
.
North Carolina
2007.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{pfeiffer:2007:ISD1,
title = {Interactive Social Displays},
author = {Pfeiffer, Thies and Latoschik, Marc Erich},
booktitle = {Proceedings of the IEEE Symposium on 3D User Interfaces 2007},
year = {2007},
address = {North Carolina},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/ISD-3DUI2007.pdf}
}
Abstract: The mediation of social presence is one of the most interesting challenges of modern communication technology. The proposed metaphor of Interactive Social Displays describes new ways of interactions with multi-/crossmodal interfaces prepared for a psychologically augmented communication. A first prototype demonstrates the application of this metaphor in a teleconferencing scenario.
Thies Pfeiffer, Marc Erich Latoschik,
Interactive Social Displays
, In
13th Eurographics Symposium on Virtual Environments/10th Workshop on Immersive Projection Technology IPT-EGVE 2007
, pp. 41-42
.
2007.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{pfeiffer:2007:InteractiveSDs,
title = {Interactive Social Displays},
author = {Pfeiffer, Thies and Latoschik, Marc Erich},
booktitle = {13th Eurographics Symposium on Virtual Environments/10th Workshop on Immersive Projection Technology IPT-EGVE 2007},
year = {2007},
pages = {41-42},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2007-ipt-egve-isd.pdf}
}
Abstract: The mediation of social presence is one of the most interesting challenges of modern communication technology. The proposed metaphor of Interactive Social Displays describes new ways of interactions with multi-/crossmodal interfaces prepared for a psychologically augmented communication. A first prototype demonstrates the application of this metaphor in a teleconferencing scenario.
Felix Rabe, Christian Fröhlich, Marc Erich Latoschik,
Low-Cost Image Generation for Immersive Multi-Screen Environments
, In
Virtuelle und Erweiterte Realität, 4. Workshop of the GI VR & AR special interest group
, pp. 65-76
.
2007.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{rabe:2007:low-cost,
title = {Low-Cost Image Generation for Immersive Multi-Screen Environments},
author = {Rabe, Felix and Fröhlich, Christian and Latoschik, Marc Erich},
booktitle = {Virtuelle und Erweiterte Realität, 4. Workshop of the GI VR & AR special interest group},
year = {2007},
pages = {65-76},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/low-cost-ig-VRAR07.pdf}
}
Abstract: This paper describes the configuration of a cost-efficient monolithic render server aimed at multi-screen Virtual Reality display devices. The system uses common Off-The-Shelf (OTS) PC components and feeds up to 6 independent screens via 3 graphics pipes with the potential to feed up to 12 screens. The internal graphics accelerators each use at least 8 PCIe lanes which results in sufficient bandwidth. Performance measurements are provided for several benchmarks which compare the system's performance to well established network based render clusters.
Peter Biermann, Christian Fröhlich, Marc Erich Latoschik, Ipke Wachsmuth,
Semantic Information and Local Constraints for Parametric Parts in Interactive Virtual Construction
, In
Proceedings of the 8th International Symposium on Smart Graphics 2007, SG2007
, pp. 124-134
.
2007.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{biermann:2007:Semantic,
title = {Semantic Information and Local Constraints for Parametric Parts in Interactive Virtual Construction},
author = {Biermann, Peter and Fröhlich, Christian and Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {Proceedings of the 8th International Symposium on Smart Graphics 2007, SG2007},
year = {2007},
pages = {124-134},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/semantic-info-and-constraints-SG07.pdf}
}
Abstract: This paper introduces a semantic representation for virtual prototyping in interactive virtual construction applications. The representation reflects semantic information about dynamic constraints to define objects' modification and construction behavior as well as knowledge structures supporting multimodal interaction utilizing speech and gesture. It is conveniently defined using XML-based markup for virtual building parts. The semantic information is processed during runtime in two ways: Constraint graphs are mapped to a generalized data-flow network and scene-graph. Interaction knowledge is accessed and matched during multimodal analysis.
Marc Erich Latoschik,
Semantic Reflection -- Knowledge Based Design of Intelligent Simulation Environments
, In
Proceedings of the 30th German Conference on Artificial Intelligence KI07
, pp. 481-484
.
2007.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik:2007:knowledge_based,
title = {Semantic Reflection -- Knowledge Based Design of Intelligent Simulation Environments},
author = {Latoschik, Marc Erich},
booktitle = {Proceedings of the 30th German Conference on Artificial Intelligence KI07},
year = {2007},
pages = {481-484},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/semantic-reflection-KI07.pdf}
}
Abstract: This paper introduces Semantic Reflection (SR), a design paradigm for intelligent applications which represents applications' objects and interfaces on a common knowledge representation layer (KRL). SR provides unified knowledge reflectivity specifically important for complex architectures of novel human-machine interface systems.
Marc Erich Latoschik, Christian Fröhlich,
Semantic Reflection for Intelligent Virtual Environments
, In
Proceedings of the IEEE VR 2007
, pp. 305-306
.
2007.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik:2007:semantic_reflection,
title = {Semantic Reflection for Intelligent Virtual Environments},
author = {Latoschik, Marc Erich and Fröhlich, Christian},
booktitle = {Proceedings of the IEEE VR 2007},
year = {2007},
pages = {305-306},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/semantic-reflection-VR07.pdf}
}
Abstract: We introduce semantic reflection as an architectural concept for Intelligent Virtual Environments (IVEs). SCIVE, a dedicated IVE simulation core, combines modularity with close coupled integrative aspects to provide semantic reflection on multiple layers from low-level simulation core logic, specific simulation modules' application definitions, to high-level semantic environment descriptions. SCIVE's Knowledge Representation Layer provides the central organizing structure which ties together data representations of simulation modules, e.g., for graphics, physics, audio, haptics, or AI etc., while it additionally allows bidirectional knowledge driven access between the modules.
Marc Erich Latoschik, Christian Fröhlich,
Towards Intelligent VR: Multi-Layered Semantic Reflection for Intelligent Virtual Environments
, In
Proceedings of the Graphics and Applications GRAPP 2007
, pp. 249-259
.
2007.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{latoschik:2007:intelligentVR,
title = {Towards Intelligent VR: Multi-Layered Semantic Reflection for Intelligent Virtual Environments},
author = {Latoschik, Marc Erich and Fröhlich, Christian},
booktitle = {Proceedings of the Graphics and Applications GRAPP 2007},
year = {2007},
pages = {249-259},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Towards-IVR-grapp07-latoschik.pdf}
}
Abstract: This paper introduces semantic reflection, a novel concept for a modular design of intelligent applications. SCIVE, a simulation core for intelligent Virtual Environments (IVEs), provides semantic reflection on multiple layers: SCIVE's architecture grants semantic driven uniform access to low-level simulation core logic, to specific simulation modules' application definitions, as well as to high-level semantic environment descriptions. It additionally provides a frame to conveniently interconnect various simulation modules, e.g., for graphics, physics, audio, haptics, or AI etc. SCIVE's Knowledge Representation Layer's base formalism provides the central organizing structure for the diverse modules' data representations. It allows bidirectional knowledge driven access between the modules since their specific data structures and functions are transitively reflected by the semantic layer. Hence SCIVE preserves, integrates and provides unified access to the development paradigms of the interconnected modules, e.g., scene graph metaphors or field route concepts etc. well known from today's Virtual Reality systems. SCIVE's semantic reflection implementation details are illustrated following a complex example application. We illustrate how semantic reflection and modularity support extensibility and maintainability of VR applications, potential for automatic system configuration and optimization, as well as the base for comprehensive knowledge driven access for IVEs.
Marc Erich Latoschik, B. Fröhlich (Eds.), Virtuelle und Erweiterte Realität, 4. Workshop of the GI special interest group VR/AR. Shaker Verlag, 2007.
@proceedings{latoschik:ARVR:2007,
title = {Virtuelle und Erweiterte Realität, 4. Workshop of the GI special interest group VR/AR},
editor = {Latoschik, Marc Erich and Fröhlich, B.},
year = {2007},
publisher = {Shaker Verlag},
url = {}
}
Abstract: The special interest group Virtual and Augmented Reality of the Gesellschaft für Informatik was founded in 2003 as an information platform and community of interest. Since 2004 the group has organized an annual workshop, which is hosted at the Bauhaus-Universität Weimar on July 15, 2007. This volume contains the 22 selected contributions. The range of topics spans human-machine interaction, simulation, projection, and rendering technologies, and VR/AR application scenarios. We are particularly pleased about the active participation of many young researchers and students -- an indication of the continuing attractiveness and topicality of the future technologies Virtual Reality and Augmented Reality. Special thanks go to the program committee and the many helpers in the local organization, without whose support the organization and execution of this event would not have been possible. We look forward to an interesting event with lively discussions.
Sebastian Hammerl, Tim Preuss, Marc Erich Latoschik, WiiNC - Wii Network Control - Einsatz des Wii-Controllers für VR-Anwendungen, In Virtuelle und Erweiterte Realität, 4. Workshop of the GI VR & AR special interest group, pp. 141-148. 2007.
@inproceedings{hammerl:2007:WiiNC,
title = {WiiNC - Wii Network Control - Einsatz des Wii-Controllers für VR-Anwendungen},
author = {Hammerl, Sebastian and Preuss, Tim and Latoschik, Marc Erich},
booktitle = {Virtuelle und Erweiterte Realität, 4. Workshop of the GI VR & AR special interest group},
year = {2007},
pages = {141-148},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/WiiNC-GIVRAR07.pdf}
}
Abstract: This article presents the integration of a low-cost input device for virtual environments based on the Nintendo Wii controller. With the WiiMote and the Nunchuk it is possible to capture accelerations of both hands. Vibration and the integrated LEDs additionally provide feedback channels. This article discusses the possibilities and limits of the controller. It presents an interface that allows the Wii controller to be used efficiently both on single-user systems and in VR/AR systems (e.g., CAVEs). Several application examples for its meaningful use are also presented.
2006
Elmar Bomberg, Marc Erich Latoschik, Monoscopic 6DOF Detection using a Laser Pointer, In Virtuelle und Erweiterte Realität, 3. Workshop of the GI VR & AR special interest group, pp. 143-154. 2006.
@inproceedings{bomberg:2006:monoscopic,
title = {Monoscopic 6DOF Detection using a Laser Pointer},
author = {Bomberg, Elmar and Latoschik, Marc Erich},
booktitle = {Virtuelle und Erweiterte Realität, 3. Workshop of the GI VR & AR special interest group},
year = {2006},
pages = {143-154},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Mono6DofDetection-GIVRAR06.pdf}
}
Abstract: This article illustrates the detection of 6 degrees of freedom (DOF) by utilizing a simple laser pointer device and a camera. The laser pointer device is augmented by a diffraction grating to project a unique laser grid onto the projection planes used in projection based immersive VR-setups. The distortion of the projected grid is then used to calculate the additional degrees of freedom as required for human-computer interaction purposes.
Marc Erich Latoschik, Christian Fröhlich, Alexander Wendler, Scene Synchronization in Close Coupled World Representations using SCIVE, In The International Journal of Virtual Reality, Vol. 5 (3), pp. 47-52. 2006.
@article{latoschik:2006:scenesync,
title = {Scene Synchronization in Close Coupled World Representations using SCIVE},
author = {Latoschik, Marc Erich and Fröhlich, Christian and Wendler, Alexander},
journal = {The International Journal of Virtual Reality},
year = {2006},
volume = {5},
number = {3},
pages = {47-52},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/SCIVE-IJVR06.pdf}
}
Abstract: This paper introduces SCIVE, a Simulation Core for Intelligent Virtual Environments. SCIVE provides a Knowledge Representation Layer (KRL) as a central organizing structure. Based on a semantic net, it ties together the data representations of the various simulation modules, e.g., for graphics, physics, audio, haptics or Artificial Intelligence (AI) representations. SCIVE's open architecture allows a seamless integration and modification of these modules. Their data synchronization is widely customizable to support extensibility and maintainability. Synchronization can be controlled through filters which in turn can be instantiated and parametrized by any of the modules, e.g., the AI component can be used to change an object's behavior to be controlled by the physics instead of the interaction- or a keyframe-module. This bidirectional inter-module access is mapped by, and routed through, the KRL which semantically reflects all objects or entities the simulation comprises. Hence, SCIVE allows extensive application design and customization from low-level core logic, module configuration and flow control, to the simulated scene, all on a high-level unified representation layer while it supports well known development paradigms commonly found in Virtual Reality applications.
2005
Marc Erich Latoschik, A User Interface Framework for Multimodal VR Interactions, In Proceedings of the ACM seventh International Conference on Multimodal Interfaces, ICMI 2005, pp. 76-83. ACM, 2005.
@inproceedings{latoschik:UIF:2005,
title = {A User Interface Framework for Multimodal VR Interactions},
author = {Latoschik, Marc Erich},
booktitle = {Proceedings of the ACM seventh International Conference on Multimodal Interfaces, ICMI 2005},
year = {2005},
pages = {76-83},
publisher = {ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/pp217-latoschik.pdf}
}
Abstract: This article presents a User Interface (UI) framework for multimodal interactions targeted at immersive virtual environments. Its configurable input and gesture processing components provide an advanced behavior graph capable of routing continuous data streams asynchronously. The framework introduces a Knowledge Representation Layer which augments objects of the simulated environment with Semantic Entities as a central object model that bridges and interfaces Virtual Reality (VR) and Artificial Intelligence (AI) representations. Specialized node types use these facilities to implement required processing tasks like gesture detection, preprocessing of the visual scene for multimodal integration, or translation of movements into multimodally initialized gestural interactions. A modified Augmented Transition Network (ATN) approach accesses the knowledge layer as well as the preprocessing components to integrate linguistic, gestural, and context information in parallel. The overall framework emphasizes extensibility, adaptivity and reusability, e.g., by utilizing persistent and interchangeable XML-based formats to describe its processing stages.
Guido Heumer, Malte Schilling, Marc Erich Latoschik, Automatic Data Exchange and Synchronization for Knowledge-Based Intelligent Virtual Environments, In Proceedings of the IEEE VR2005, pp. 43-50. 2005.
@inproceedings{heumeretal:automaticdataexchange:VR2005,
title = {Automatic Data Exchange and Synchronization for Knowledge-Based Intelligent Virtual Environments},
author = {Heumer, Guido and Schilling, Malte and Latoschik, Marc Erich},
booktitle = {Proceedings of the IEEE VR2005},
year = {2005},
pages = {43-50},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Automatic_Data_Exchange.pdf}
}
Abstract: Advanced VR simulation systems are composed of several components with independent and heterogeneously structured databases. To guarantee a closed and consistent world simulation, flexible and robust data exchange between these components has to be realized. This multiple database problem is well known in many distributed application domains, but it is central for VR setups composed of diverse simulation components. Particularly complicated is the exchange between object-centered and graph-based representation formats, where entity attributes may be distributed over the graph structure. This article presents an abstract declarative attribute representation concept, which handles different representation formats uniformly and enables automatic data exchange and synchronization between them. This mechanism is tailored to support the integration of a central knowledge component, which provides a uniform representation of the accumulated knowledge of the several simulation components involved. This component handles the incoming--possibly conflicting--world changes propagated by the diverse components. It becomes the central instance for process flow synchronization of several autonomous evaluation loops.
Marc Erich Latoschik, Peter Biermann, Ipke Wachsmuth, High-level Semantics Representation for Intelligent Simulative Environments, In Proceedings of the IEEE VR2005, pp. 283-284. 2005.
@inproceedings{latoschik:2005:semantics_rep,
title = {High-level Semantics Representation for Intelligent Simulative Environments},
author = {Latoschik, Marc Erich and Biermann, Peter and Wachsmuth, Ipke},
booktitle = {Proceedings of the IEEE VR2005},
year = {2005},
pages = {283-284},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/high_level_semantics_representation.pdf}
}
Abstract: This article describes an integration of knowledge based techniques into simulative Virtual Reality (VR) applications motivated using a virtual construction task. An abstract Knowledge Representation Layer (KRL) provides a base formalism for the integration of simulation semantics. The KRL approach is demonstrated using a generalized scene graph representation which introduces an abstract implementation of geometric node interrelations.
Marc Erich Latoschik, Peter Biermann, Ipke Wachsmuth, Knowledge in the Loop: Semantics Representation for Multimodal Simulative Environments, In Proceedings of the 5th International Symposium on Smart Graphics 2005, pp. 25-39. 2005.
@inproceedings{latoschik:2005:knowledge_in_loop,
title = {Knowledge in the Loop: Semantics Representation for Multimodal Simulative Environments},
author = {Latoschik, Marc Erich and Biermann, Peter and Wachsmuth, Ipke},
booktitle = {Proceedings of the 5th International Symposium on Smart Graphics 2005},
year = {2005},
pages = {25-39},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/kitl-print-springer-final.pdf}
}
Abstract: This article describes the integration of knowledge based techniques into simulative Virtual Reality (VR) applications. The approach is motivated using multimodal Virtual Construction as an example domain. An abstract Knowledge Representation Layer (KRL) is proposed which is expressive enough to define all necessary data for diverse simulation tasks and which additionally provides a base formalism for the integration of Artificial Intelligence (AI) representations. The KRL supports two different implementation methods. The first method uses XSLT processing to transform the external KRL format into the representation formats of the diverse target systems. The second method implements the KRL using a functionally extendable semantic network. The semantic net library is tailored for real time simulation systems where it interconnects the required simulation modules and establishes access to the knowledge representations inside the simulation loop. The KRL promotes a novel object model for simulated objects called Semantic Entities which provides a uniform access to the KRL and which allows extensive system modularization. The KRL approach is demonstrated in two simulation areas. First, a generalized scene graph representation is presented which introduces an abstract definition and implementation of geometric node interrelations. It supports scene and application structures which cannot be expressed using common scene hierarchies or field route concepts. Second, the KRL's expressiveness is demonstrated in the design of multimodal interactions. Here, the KRL defines the knowledge particularly required during the semantic analysis of multimodal user utterances.
2004
Thies Pfeiffer, Marc Erich Latoschik, Resolving Object References in multimodal Dialogues for Immersive Virtual Environments, In Proceedings of the IEEE Virtual Reality conference 2004, pp. 35-42. 2004.
@inproceedings{pfeiffer_latoschik:Resolving:VR2004,
title = {Resolving Object References in multimodal Dialogues for Immersive Virtual Environments},
author = {Pfeiffer, Thies and Latoschik, Marc Erich},
booktitle = {Proceedings of the IEEE Virtual Reality conference 2004},
year = {2004},
pages = {35-42},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Resolving_Object_References.pdf}
}
Abstract: This paper describes the underlying concepts and the technical implementation of a system for resolving multimodal references in Virtual Reality (VR). In this system the temporal and semantic relations intrinsic to referential utterances are expressed as a constraint satisfaction problem, where the propositional value of each referential unit during a multimodal dialogue incrementally updates the active set of constraints. As the system is based on findings of human cognition research, it also takes into account, e.g., constraints implicitly assumed by human communicators. The implementation takes VR-related real-time and immersive conditions into account and adapts its architecture to well-known scene-graph-based design patterns by introducing a so-called reference resolution engine. Regarding both the conceptual work and the implementation, special care has been taken to allow further refinements and modifications of the underlying resolving processes on a high-level basis.
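The constraint-based resolution idea in the abstract above can be sketched in a few lines: each referential unit (a spoken word, a pointing gesture) adds a constraint, and the set of candidate referents is narrowed incrementally. This is a hypothetical illustration, not the paper's engine; the scene objects, attribute names, and `ReferenceResolver` class are invented.

```python
# Hypothetical sketch of incremental multimodal reference resolution:
# referential units arrive one by one, each contributing a constraint
# (a predicate over scene objects) that prunes the candidate set.

scene = [
    {"id": "pipe1", "type": "pipe", "color": "red",  "pointed_at": False},
    {"id": "pipe2", "type": "pipe", "color": "blue", "pointed_at": True},
    {"id": "bar1",  "type": "bar",  "color": "blue", "pointed_at": True},
]

class ReferenceResolver:
    def __init__(self, objects):
        self.candidates = list(objects)

    def add_constraint(self, predicate):
        # Incremental update: keep only candidates satisfying the new unit.
        self.candidates = [o for o in self.candidates if predicate(o)]
        return self.candidates

resolver = ReferenceResolver(scene)
resolver.add_constraint(lambda o: o["type"] == "pipe")  # speech: "the pipe ..."
resolver.add_constraint(lambda o: o["pointed_at"])      # deictic gesture
referent = resolver.candidates[0]["id"] if len(resolver.candidates) == 1 else None
```

The actual system formulates this as a full constraint satisfaction problem with temporal relations; the sketch only shows the incremental-narrowing behavior.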
2003
Marc Erich Latoschik, Malte Schilling, Incorporating VR Databases into AI Knowledge Representations: A Framework for Intelligent Graphics Applications, In Proceedings of the Sixth International Conference on Computer Graphics and Imaging, pp. 79-84. ACTA Press, 2003.
@inproceedings{latoschik:Incorporating:CGIM03,
title = {Incorporating VR Databases into AI Knowledge Representations: A Framework for Intelligent Graphics Applications},
author = {Latoschik, Marc Erich and Schilling, Malte},
booktitle = {Proceedings of the Sixth International Conference on Computer Graphics and Imaging},
year = {2003},
pages = {79-84},
publisher = {ACTA Press},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/incorporating_VR_into_AI.pdf}
}
Abstract: This article presents a framework for incorporating commonly used VR (Virtual Reality) databases for graphics and physics simulation into an AI (Artificial Intelligence) knowledge base using a unifying semantic net (SN) representation. Several examples in the area of multimodal interaction for AI-based graphics applications are given to motivate this approach. An evaluation of the chosen SN knowledge representation (KR) regarding five roles suitable to analyze a given KR is followed by a discussion of the resulting conceptual and technical requirements for the underlying DB/KBMS (database/knowledge base management system) which supports the chosen KR as well as mediating layers for external simulation-relevant modules.
Marc Erich Latoschik, Multimodale Interaktion in Virtueller Realität am Beispiel der virtuellen Konstruktion, In KI-Künstliche Intelligenz: Embodied Conversational Agents, Vol. 4 (03), pp. 37-38. 2003.
@article{latoschik:2003:mmivr,
title = {Multimodale Interaktion in Virtueller Realität am Beispiel der virtuellen Konstruktion},
author = {Latoschik, Marc Erich},
journal = {KI-Künstliche Intelligenz: Embodied Conversational Agents},
year = {2003},
volume = {4},
number = {03},
pages = {37-38},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/mmivr-am-bsp-KI-ECA03.pdf}
}
Abstract: Virtual reality (VR) systems pose new challenges to the way systems are operated. Transferring conventional 2D-oriented WIMP interfaces (Windows, Icons, Menus, Pointer) often proves unsuitable here. In this context, the presented work pursues the approach of realizing VR interactions via natural human communication with gesture and speech. To this end, core components were developed which are embedded in the real-time-driven program flow of VR systems and provide functions for gesture recognition and analysis as well as for multimodal integration and evaluation. These components are used in a number of current projects, e.g., in the context of virtual construction, deixis research, or for interaction with MAX, an anthropomorphic counterpart in virtual environments.
Thies Pfeiffer, Ian Voss, Marc Erich Latoschik, Resolution of Multimodal Object References using Conceptual Short Term Memory, In Proceedings of the EuroCogSci03, p. 462. Lawrence Erlbaum Associates Inc, 2003.
@inproceedings{pfeiffer_etal:Resolution:EuroCogSci03,
title = {Resolution of Multimodal Object References using Conceptual Short Term Memory},
author = {Pfeiffer, Thies and Voss, Ian and Latoschik, Marc Erich},
booktitle = {Proceedings of the EuroCogSci03},
year = {2003},
pages = {462},
publisher = {Lawrence Erlbaum Associates Inc},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Resolution_of_Multimodal_Object_References.pdf}
}
Abstract: In the Collaborative Research Center (SFB 360) at the University of Bielefeld we are concerned with situated artificial communicators. In one application scenario the user is involved in a task-oriented discourse with the embodied conversational agent MAX. The user guides the agent through an assembly process by means of task descriptions uttered using both German natural language and gestures. The work presented is on a system for identifying referent objects for deictic references in a real-time Virtual Reality (VR) environment.
2002
Marc Erich Latoschik, Designing Transition Networks for Multimodal VR-Interactions Using a Markup Language, In Proceedings of the Fourth ACM International Conference on Multimodal Interfaces ICMI'02, Pittsburgh, Pennsylvania, pp. 411-416. ACM, 2002.
@inproceedings{latoschik:Designing:ICMI02,
title = {Designing Transition Networks for Multimodal VR-Interactions Using a Markup Language},
author = {Latoschik, Marc Erich},
booktitle = {Proceedings of the Fourth ACM International Conference on Multimodal Interfaces ICMI'02, Pittsburgh, Pennsylvania},
year = {2002},
pages = {411-416},
publisher = {ACM},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/dtn_for_multimodal.pdf}
}
Abstract: This article presents one core component for enabling multimodal speech- and gesture-driven interaction in and for Virtual Environments. A so-called temporal Augmented Transition Network (tATN) is introduced. It allows integrating and evaluating information from speech, gesture, and a given application context using a combined syntactic/semantic parse approach. This tATN represents the target structure for a multimodal integration markup language (MIML). MIML centers around the specification of multimodal interactions by letting an application designer declare temporal and semantic relations between given input utterance percepts and certain application states in a declarative and portable manner. A subsequent parse pass translates MIML into corresponding tATNs which are directly loaded and executed by a simulation engine's scripting facility.
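The temporal transition-network idea described in the abstract above — state transitions fired by speech and gesture percepts, gated by temporal coherence — can be sketched minimally. This is a hypothetical toy in the spirit of a tATN, not the paper's formalism; the transition table, percept names, and the `MAX_GAP` window are assumptions for illustration.

```python
# Hypothetical minimal temporal transition network: states advance on
# percepts (speech tokens, gesture events), and a transition only counts
# if the percept arrives within a temporal window of the previous one.

MAX_GAP = 1.0  # seconds allowed between coupled percepts (assumed value)

TRANSITIONS = {
    ("start", "speech:take"): "await_object",
    ("await_object", "gesture:point"): "object_selected",
}

def run_tatn(percepts):
    """percepts: list of (kind, timestamp); returns final state or 'reject'."""
    state, last_t = "start", None
    for kind, t in percepts:
        if last_t is not None and t - last_t > MAX_GAP:
            return "reject"  # temporal coherence violated
        nxt = TRANSITIONS.get((state, kind))
        if nxt is not None:
            state, last_t = nxt, t
    return state

# "Take ..." followed 0.4 s later by a pointing gesture: accepted.
ok = run_tatn([("speech:take", 0.0), ("gesture:point", 0.4)])
# The same gesture 2.5 s later falls outside the window: rejected.
late = run_tatn([("speech:take", 0.0), ("gesture:point", 2.5)])
```

A real tATN additionally attaches semantic conditions and registers to transitions; the sketch only shows the combined state/time gating.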
Bernhard Jung, Marc Erich Latoschik, Peter Biermann, Ipke Wachsmuth, Virtuelle Werkstatt, In 1. Paderborner Workshop Augmented Reality / Virtual Reality in der Produktentstehung, pp. 185-196. HNI, 2002.
@inproceedings{jung_etal:2002:VWerk,
title = {Virtuelle Werkstatt},
author = {Jung, Bernhard and Latoschik, Marc Erich and Biermann, Peter and Wachsmuth, Ipke},
booktitle = {1. Paderborner Workshop Augmented Reality / Virtual Reality in der Produktentstehung},
year = {2002},
pages = {185-196},
publisher = {HNI},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/jlbw02.pdf}
}
Abstract: The project "Virtuelle Werkstatt" (virtual workshop) aims to extend and combine research from the areas of multimodal interaction and virtual construction in such a way that their realistic evaluation in virtual reality (VR) becomes demonstrable. Multimodal interaction concerns the immediate execution of user interventions in a visualized 3D scene based on speech-accompanied gesture input. Virtual construction concerns the creation and testing of computer-graphically visualized 3D models of planned mechanical constructions (so-called virtual prototypes) in order to enable a realistic pre-exploration of designs via simulation in virtual reality. The use of a CAVE-like large-scale VR display makes user interactions with speech-accompanied gesture input explorable both within grasping range and at a distance.
Peter Biermann, Bernhard Jung, Marc Erich Latoschik, Ipke Wachsmuth, Virtuelle Werkstatt: A Platform for Multimodal Assembly in VR, In Proceedings Fourth Virtual Reality International Conference (VRIC 2002), Laval, France, pp. 53-62. 2002.
@inproceedings{biermann_etal:2002:VWerk,
title = {Virtuelle Werkstatt: A Platform for Multimodal Assembly in VR},
author = {Biermann, Peter and Jung, Bernhard and Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {Proceedings Fourth Virtual Reality International Conference (VRIC 2002), Laval, France},
year = {2002},
pages = {53-62},
url = {http://www.techfak.uni-bielefeld.de/~ipke/download/VWerkDruck.pdf}
}
Abstract: In this paper we describe ongoing research that aims at the development of a generic demonstration platform for virtual prototype modeling by utilizing multimodal speech and gesture interactions in Virtual Reality. Particularly, we concentrate on two aspects. First, a knowledge-based approach for assembling CAD-based parts in VR is introduced. This includes a system to generate meta-information from geometric models as well as accompanying task-level algorithms for virtual assembly. Second, a framework for modeling multimodal interaction using gesture and speech is presented that facilitates its generic adaptation to scene-graph-based applications. The chosen decomposition of the required core modules is illustrated by an example of a typical object rotation interaction.
2001
T. Sowa, S. Kopp, Marc Erich Latoschik, A Communicative Mediator in a Virtual Environment: Processing of Multimodal Input and Output, In Proceedings of the International Workshop on Information Presentation and Natural Multimodal Dialogue, pp. 71-74. 2001.
@inproceedings{sow:a_communicative_mediator01,
title = {A Communicative Mediator in a Virtual Environment: Processing of Multimodal Input and Output},
author = {Sowa, T. and Kopp, S. and Latoschik, Marc Erich},
booktitle = {Proceedings of the International Workshop on Information Presentation and Natural Multimodal Dialogue},
year = {2001},
pages = {71-74},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Verona.pdf}
}
Abstract: This paper presents work on multimodal communication with an anthropomorphic agent. It focuses on processing of multimodal input and output employing natural language and gestures in virtual environments. On the input side, we describe our approach to recognize and interpret co-verbal gestures used for pointing, object manipulation, and object description. On the output side, we present the utterance generation module of the agent which is able to produce coordinated speech and gestures.
Marc Erich Latoschik, A General Framework for Multimodal Interaction in Virtual Reality Systems: PrOSA, In The Future of VR and AR Interfaces - Multimodal, Humanoid, Adaptive and Intelligent. Proceedings of the Workshop at IEEE Virtual Reality 2001 (138), pp. 21-25. 2001.
@inproceedings{lat:PrOSA-framework,
title = {A General Framework for Multimodal Interaction in Virtual Reality Systems: PrOSA},
author = {Latoschik, Marc Erich},
booktitle = {The Future of VR and AR Interfaces - Multimodal, Humanoid, Adaptive and Intelligent. Proceedings of the Workshop at IEEE Virtual Reality 2001},
year = {2001},
number = {138},
pages = {21-25},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/A_General_Framework_for_Multimodal.pdf}
}
Abstract: This article presents a modular approach to incorporate multimodal gesture- and speech-driven interaction into virtual reality systems. Based on existing techniques for modelling VR applications, the overall task is separated into different problem categories: from sensor synchronisation to a high-level description of crossmodal temporal and semantic coherences, a set of solution concepts is presented that seamlessly fits into both the static (scenegraph-based) representation and the dynamic (renderloop and immersion) aspects of a realtime application. The developed framework establishes a connecting layer between raw sensor data and a general functional description of multimodal and scene-context-related evaluation procedures for VR setups. As an example of the concepts, their implementation in a system for virtual construction is described.
Marc Erich Latoschik, A Gesture Processing Framework for Multimodal Interaction in Virtual Reality, In Proceedings of the 1st International Conference on Computer Graphics, Virtual Reality and Visualisation in Africa, AFRIGRAPH 2001, pp. 95-100. ACM SIGGRAPH, 2001.
@inproceedings{latoschik:gestureprocessing:01,
title = {A Gesture Processing Framework for Multimodal Interaction in Virtual Reality},
author = {Latoschik, Marc Erich},
booktitle = {Proceedings of the 1st International Conference on Computer Graphics, Virtual Reality and Visualisation in Africa, AFRIGRAPH 2001},
year = {2001},
pages = {95-100},
publisher = {ACM SIGGRAPH},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/A_gesture_processing_framework.pdf}
}
Abstract: This article presents a gesture detection and analysis framework for modelling multimodal interactions. It is particularly designed for use in Virtual Reality (VR) applications and contains an abstraction layer for different sensor hardware. Using the framework, gestures are described by their characteristic spatio-temporal features, which are at the lowest level calculated by simple predefined detector modules or nodes. These nodes can be connected by a data routing mechanism to perform more elaborate evaluation functions, therewith establishing complex detector nets. Typical problems that arise from the time-dependent invalidation of multimodal utterances under immersive conditions led to the development of pre-evaluation concepts that also support integration into scene-graph-based systems for traversal-type access. Examples of realized interactions illustrate applications which make use of the described concepts.
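The detector-net idea in the abstract above — low-level detector nodes computing spatio-temporal features, routed into combining nodes that recognize gestures — can be sketched as follows. This is a hypothetical illustration, not the framework's API; the 1-D sample format, the detector functions, and the `min_speed` threshold are invented for brevity.

```python
# Hypothetical sketch of a detector net: simple detector nodes compute
# spatio-temporal features from a stream of (time, position) hand samples,
# and a combining node routes their outputs to detect a "swipe right".

def velocity(samples):
    """Average speed over consecutive (t, x) samples (1-D for brevity)."""
    pairs = zip(samples, samples[1:])
    return sum(abs(x1 - x0) / (t1 - t0) for (t0, x0), (t1, x1) in pairs) / (len(samples) - 1)

def direction(samples):
    """+1 if the hand moved right overall, -1 if left."""
    return 1 if samples[-1][1] >= samples[0][1] else -1

def swipe_right_detector(samples, min_speed=0.5):
    # Combining node: consumes the outputs of the two low-level detectors.
    return velocity(samples) >= min_speed and direction(samples) == 1

stream = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.25), (0.3, 0.45)]
detected = swipe_right_detector(stream)  # fast, rightward motion
```

In the framework itself these nodes form a graph with a data routing mechanism and run inside the render loop; the sketch only shows the composition of feature detectors into a higher-level one.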
Ipke Wachsmuth, Ian Voss, Timo Sowa, Marc Erich Latoschik, Stefan Kopp, Bernhard Jung, Multimodale Interaktion in der Virtuellen Realität, In Mensch & Computer 2001 (55), pp. 265-274. B.G. Teubner Stuttgart, 2001.
@inproceedings{wachsmuth:MMIVR,
title = {Multimodale Interaktion in der Virtuellen Realität},
author = {Wachsmuth, Ipke and Voss, Ian and Sowa, Timo and Latoschik, Marc Erich and Kopp, Stefan and Jung, Bernhard},
booktitle = {Mensch & Computer 2001},
year = {2001},
number = {55},
pages = {265-274},
publisher = {B.G. Teubner Stuttgart},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/wachsmuth_mc2001.pdf}
}
Abstract: Virtual reality (VR) denotes a novel communication medium that allows the immediate interaction of humans with spatially organized, computer-generated representations. In connection with bodily anchored interaction, gestural input in particular attracts strong interest. This contribution gives an overview of research in the Artificial Intelligence and Virtual Reality Laboratory at the University of Bielefeld that develops foundations for the use of gestural and spoken interaction techniques; a virtual construction scenario serves as the test domain. For the fast capture of complex hand-arm gestures, data gloves and body trackers are currently used. The evaluation is carried out with knowledge-based approaches that symbolically describe atomic form elements of gesture and compose them into larger units. A second topic is multimodal interaction through combined speech and gesture input, e.g., when an object is pointed at ("this pipe") or a rotation direction ("this way around") is signaled. Finally, it is shown how the approaches to the form description of gestures can be transferred to the synthesis of natural-looking gestural output with an articulated, anthropomorphic figure, which in ongoing work is being coordinated with speech output.
Marc Erich Latoschik, Multimodale Interaktion in Virtueller Realität am Beispiel der virtuellen Konstruktion. infix, Berlin, 2001.
@phdthesis{latoschik:MIVR,
title = {Multimodale Interaktion in Virtueller Realität am Beispiel der virtuellen Konstruktion},
author = {Latoschik, Marc Erich},
year = {2001},
publisher = {infix, Berlin},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/l01MMIVR.pdf}
}
Abstract: By moving away from screen- and desktop-centered metaphors, virtual reality (VR) represents a turning point in multimedia technology. Operating such systems with conventional input devices while preserving the user's freedom of movement and embedding, however, becomes increasingly unnatural and requires a rethinking of established interaction concepts. This volume elaborates an approach that enables novel -- multimodal -- forms of interaction in virtual environments. To this end, modular components are provided for gesture recognition (PrOSA - Patterns on Sequences of Attributes) and for its multimodal integration with speech (extended ATN formalism). Particular attention is paid to transferability to today's high-end VR systems.
2000
Bernhard Jung, Stefan Kopp, Marc Erich Latoschik, Timo Sowa, Ipke Wachsmuth, Virtuelles Konstruieren mit Gestik und Sprache, In Künstliche Intelligenz, Vol. 2/00, pp. 5-11. arenDTaP Verlag, Bremen, 2000.
@article{jun:VirtuellesKonstruieren,
title = {Virtuelles Konstruieren mit Gestik und Sprache},
author = {Jung, Bernhard and Kopp, Stefan and Latoschik, Marc Erich and Sowa, Timo and Wachsmuth, Ipke},
journal = {Künstliche Intelligenz},
year = {2000},
volume = {2/00},
pages = {5-11},
publisher = {arenDTaP Verlag, Bremen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Virtuelles-Konstruieren-KI00.pdf}
}
Abstract: At the Bielefeld Laboratory for Artificial Intelligence and Virtual Reality, research focuses on integrating gestural and spoken communication in a virtual construction scenario. High-resolution spatial visualizations of CAD-based part models are presented at realistic scale on a large projection screen and assembled into complex aggregates using virtual reality input devices (data gloves, position sensors, a speech recognition system). Knowledge-based techniques are employed both in the assembly simulation with the computer-graphical part models and in the interpretation of the spoken and gestural input.
1999
Marc Erich Latoschik, Bernhard Jung, Ipke Wachsmuth,
Multimodale Interaktion mit einem System zur Virtuellen Konstruktion
, In
Proceedings der 29. Jahrestagung der Gesellschaft für Informatik - Informatik'99, Informatik überwindet Grenzen
, pp. 88-97
.
Springer-Verlag
, 1999.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{lat:MIS,
title = {Multimodale Interaktion mit einem System zur Virtuellen Konstruktion},
author = {Latoschik, Marc Erich and Jung, Bernhard and Wachsmuth, Ipke},
booktitle = {Proceedings der 29. Jahrestagung der Gesellschaft für Informatik - Informatik'99, Informatik überwindet Grenzen},
year = {1999},
pages = {88-97},
publisher = {Springer-Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/mis_konstruktion.pdf}
}
Abstract: This contribution presents a system for speech- and gesture-based interaction to control a virtual construction system. An overview of various manipulation tasks in this domain serves as the basis for explaining interaction examples. In addition to the user's deictic gestures, mimetic gestures that "act out" desired changes are considered. These are initiated by verbal or gestural triggers and adapt the functional modes of the interpretation process, with a distinction drawn between discrete and continuous interactions. To realize continuous modifications in the virtual scene, the concept of manipulators is complemented by so-called actuators, which represent user modalities, and by motion modifiers that compensate for noisy sensor input.
Marc Erich Latoschik, Ipke Wachsmuth,
Sprachgestützte gestische Interaktion zur Steuerung Virtueller Konstruktion
, In
Tagungsband zum Workshop des Forschungsverbundes NRW -- Die Virtuelle Wissensfabrik -- vom 23./24. September 1999
.
GMD
, 1999.
[BibTeX]
[Download]
[BibSonomy]
@inproceedings{lat:SGI,
title = {Sprachgestützte gestische Interaktion zur Steuerung Virtueller Konstruktion},
author = {Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {Tagungsband zum Workshop des Forschungsverbundes NRW -- Die Virtuelle Wissensfabrik -- vom 23./24. September 1999},
year = {1999},
publisher = {GMD},
url = {}
}
Timo Sowa, Martin Fröhlich, Marc Erich Latoschik,
Temporal Symbolic Integration Applied to a Multimodal System Using Gestures and Speech
, In
Gesture-Based Communication in Human-Computer Interaction - Proceedings of the International Gesture Workshop (Gif-sur-Yvette, France, March 1999)
, pp. 291-302
.
Springer-Verlag
, 1999.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{sow:TSI,
title = {Temporal Symbolic Integration Applied to a Multimodal System Using Gestures and Speech},
author = {Sowa, Timo and Fröhlich, Martin and Latoschik, Marc Erich},
booktitle = {Gesture-Based Communication in Human-Computer Interaction - Proceedings of the International Gesture Workshop (Gif-sur-Yvette, France, March 1999)},
year = {1999},
pages = {291-302},
publisher = {Springer-Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Temporal-GW99.pdf}
}
Abstract: This paper presents a technical approach to temporal symbol integration intended to be generally applicable in unimodal and multimodal user interfaces. It draws its strength from symbolic data representation and an underlying rule-based system, and is embedded in a multi-agent system. The core method for temporal integration is motivated by findings from cognitive science research. We discuss its application to a gesture recognition task and to speech-gesture integration in a Virtual Construction scenario. Finally, an outlook on an empirical evaluation is given.
1998
Marc Erich Latoschik, Ipke Wachsmuth,
Exploiting Distant Pointing Gestures for Object Selection in a Virtual Environment
, In
Gesture and Sign Language in Human-Computer Interaction
, Vol.
1371
, pp. 185-196
.
Springer-Verlag
, 1998.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{lat:EDP,
title = {Exploiting Distant Pointing Gestures for Object Selection in a Virtual Environment},
author = {Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {Gesture and Sign Language in Human-Computer Interaction},
year = {1998},
volume = {1371},
pages = {185-196},
publisher = {Springer-Verlag},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/exploiting_distant_pointing_gestures.pdf}
}
Abstract: Developing state-of-the-art multimedia applications nowadays calls for the use of sophisticated visualisation and immersion techniques, commonly referred to as Virtual Reality. While Virtual Reality now achieves good results both in image quality and in fast user feedback using parallel computation techniques, the methods for interacting with these systems need to be improved. In this paper we introduce, first, a multimedia application that uses a gesture-driven interface and, second, the architecture for an expandable gesture recognition system. After different gesture types for interaction in a virtual environment are discussed with respect to the required functionality, the implementation of a specific gesture detection module for distant pointing recognition is described, and the whole system design is tested for its task adequacy.
Bernhard Jung, Marc Erich Latoschik, Ipke Wachsmuth,
Knowledge-Based Assembly Simulation for Virtual Prototype Modeling
, In
IECON'98: Proceedings of the 24th annual Conference of the IEEE Industrial Electronics Society
, Vol.
4
, pp. 2152-2157
.
1998.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{jun:KBA,
title = {Knowledge-Based Assembly Simulation for Virtual Prototype Modeling},
author = {Jung, Bernhard and Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {IECON'98: Proceedings of the 24th annual Conference of the IEEE Industrial Electronics Society},
year = {1998},
volume = {4},
pages = {2152-2157},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/KBbased-assembly-IECON98.pdf}
}
Abstract: The idea of Virtual Prototyping is the use of realistic digital product models for design and functionality analysis in early stages of the product development cycle. The goal of our research is to make modeling of virtual prototypes more intuitive and powerful by using knowledge enhanced Virtual Reality techniques for interactive construction of virtual prototypes from 3D-visualized, CAD-based parts. To this end, a knowledge-based approach for real-time assembly simulation has been developed that draws on dynamically updated representations of part matings and assembly structure. The approach has been implemented in an experimental system, the CODY Virtual Constructor, that supports a variety of interaction modalities, such as direct manipulation, natural language, and gestures.
Marc Erich Latoschik, Ipke Wachsmuth,
Sprachbegleitete Körper-Gestik vor multimedialen Großdisplays
, In
Forschung an der Universität Bielefeld
, Vol.
17
, pp. 7-9
.
Universität Bielefeld, Informations- und Pressestelle
, 1998.
[BibTeX]
[Download]
[BibSonomy]
@incollection{lat:sprachbegleitete,
title = {Sprachbegleitete Körper-Gestik vor multimedialen Großdisplays},
author = {Latoschik, Marc Erich and Wachsmuth, Ipke},
booktitle = {Forschung an der Universität Bielefeld},
year = {1998},
volume = {17},
pages = {7-9},
publisher = {Universität Bielefeld, Informations- und Pressestelle},
url = {}
}
Marc Erich Latoschik, Martin Fröhlich, Bernhard Jung, Ipke Wachsmuth,
Utilize Speech and Gestures to Realize Natural Interaction in a Virtual Environment
, In
IECON'98: Proceedings of the 24th annual Conference of the IEEE Industrial Electronics Society
, Vol.
4
, pp. 2028-2033
.
1998.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{lat:USG,
title = {Utilize Speech and Gestures to Realize Natural Interaction in a Virtual Environment},
author = {Latoschik, Marc Erich and Fröhlich, Martin and Jung, Bernhard and Wachsmuth, Ipke},
booktitle = {IECON'98: Proceedings of the 24th annual Conference of the IEEE Industrial Electronics Society},
year = {1998},
volume = {4},
pages = {2028-2033},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/usg_to_realize.pdf}
}
Abstract: Virtual environments are a new means for human-computer interaction. Whereas techniques for visual presentation have reached a high level of maturity in recent years, many of the input devices and interaction techniques still tend to be awkward for this new medium. Where the borders between real and artificial environments vanish, a more natural way of interaction is desirable. To this end, we investigate the benefits of integrated speech- and gesture-based interfaces for interacting with virtual environments. Our research results are applied within a virtual construction scenario, where 3D-visualized mechanical objects can be spatially rearranged and assembled using speech- and gesture-based communication.
1997
Ipke Wachsmuth, Britta Lenzmann, Tanja Jörding, Bernhard Jung, Marc Erich Latoschik, Martin Fröhlich,
A Virtual Interface Agent and its Agency
, In
Proceedings of the First International Conference on Autonomous Agents
, pp. 516-517
.
1997.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{wac:avirtualinterface,
title = {A Virtual Interface Agent and its Agency},
author = {Wachsmuth, Ipke and Lenzmann, Britta and Jörding, Tanja and Jung, Bernhard and Latoschik, Marc Erich and Fröhlich, Martin},
booktitle = {Proceedings of the First International Conference on Autonomous Agents},
year = {1997},
pages = {516-517},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/A_Virtual_Interface_Agent.pdf}
}
Abstract: In the VIENA Project ("Virtual Environments and Agents") we develop easy-to-use virtual environments for interactive design and exploration. We have modeled and implemented a synthetic human-like agent, Hamilton, that inhabits a simulated office environment and acts as an embodied virtual interface agent (VIA). To explore or change the simulated environment, people can instruct Hamilton by way of verbal input and simple hand gestures. Hamilton has a variety of functionalities which are put in effect by its agency, a multi-agent system. In mediating an instruction, invisible agents track exact object locations and colorings, and they negotiate alternative ways of acting. Hamilton's agency is also able to adapt to individual users' preferences during run time. As the VIA is present in the synthetic scene, users can take advantage of its anthropomorphic features, and they can choose to communicate with the agent from an external or an immersed view.