

On Wednesday, April 12, 2023, the Central Student Advisory Service (Zentrale Studienberatung) will hold the orientation talk "Erste Schritte ins Studium" ("First Steps into University") for first-semester students of all disciplines.

On March 8, 2023, Prof. Latoschik will give a talk at an alumni event. Seize the opportunity and join via Zoom!

From April 11th to 13th, the Würtual Reality XR Meeting 2023 will take place at the University of Würzburg, Germany. The HCI Chair will offer various demonstrations as well as talks on current research topics for the participants.

On the 23rd and 24th of February we visited the AI.BAY 2023 in Munich in order to support the CAIDAS and to showcase two of our current projects together with the Center for Artificial Intelligence and Robotics (CAIRO).
Open Positions

We are looking for student workers to help develop and administer two VHB online courses

We have an open position for a motivated student worker in the VIA-VR Project (ELSI).

We have an open position for a research staff member (wissenschaftlicher Dienst) in the AIL AT WORK project.

We have open positions for motivated student workers in the HiAvA Project (Unity development).
Recent Publications
The NarRobot Plugin - Connecting the Social Robot Reeti to the Unity Game Engine, In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI).
2023.
[BibSonomy] [Doi]
@inproceedings{steinhaeusser2023narrobot,
author = {Sophia C. Steinhaeusser and Lenny Siol and Elisabeth Ganal and Sophia Maier and Birgit Lugrin},
year = {2023},
booktitle = {Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI)},
title = {The NarRobot Plugin - Connecting the Social Robot Reeti to the Unity Game Engine}
}
Using a Social Robot as a Hotel Assessment Tool, In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI).
2023.
[BibSonomy] [Doi]
@inproceedings{lein2023,
author = {Martina Lein and Melissa Donnermann and Sophia C. Steinhaeusser and Birgit Lugrin},
year = {2023},
booktitle = {Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI)},
title = {Using a Social Robot as a Hotel Assessment Tool}
}
Text Input for Non-Stationary XR Workspaces: Investigating Tap and Word-Gesture Keyboards in Virtual and Augmented Reality, In IEEE Transactions on Visualization and Computer Graphics, pp. 1-12.
2023.
[Download] [BibSonomy] [Doi]
@article{kern2023input,
author = {Florian Kern and Florian Niebling and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
url = {https://ieeexplore.ieee.org/document/10049665/},
year = {2023},
pages = {1-12},
title = {Text Input for Non-Stationary XR Workspaces: Investigating Tap and Word-Gesture Keyboards in Virtual and Augmented Reality}
}
Abstract:
This article compares two state-of-the-art text input techniques between non-stationary virtual reality (VR) and video see-through augmented reality (VST AR) use cases as XR display conditions. The developed contact-based mid-air virtual tap and word-gesture (swipe) keyboards provide established support functions for text correction, word suggestions, capitalization, and punctuation. A user evaluation with 64 participants revealed that XR displays and input techniques strongly affect text entry performance, while subjective measures are only influenced by the input techniques. We found significantly higher usability and user experience ratings for tap keyboards compared to swipe keyboards in both VR and VST AR. Task load was also lower for tap keyboards. In terms of performance, both input techniques were significantly faster in VR than in VST AR. Further, the tap keyboard was significantly faster than the swipe keyboard in VR. Participants showed a significant learning effect with only ten sentences typed per condition. Our results are consistent with previous work in VR and optical see-through (OST) AR, but additionally provide novel insights into the usability and performance of the selected text input techniques for VST AR. The significant differences in subjective and objective measures emphasize the importance of specific evaluations for each possible combination of input techniques and XR displays to provide reusable, reliable, and high-quality text input solutions. With our work, we form a foundation for future research and XR workspaces. Our reference implementation is publicly available to encourage replicability and reuse in future XR workspaces.
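As background for such evaluations: text entry speed is conventionally reported in words per minute (WPM), where one "word" is defined as five characters including spaces. The snippet below is a minimal sketch of this standard metric (our illustration, not code from the paper):

def words_per_minute(transcribed: str, seconds: float) -> float:
    # Standard text entry metric: one "word" = 5 characters, and the
    # first character carries no timing information, hence len - 1.
    return ((len(transcribed) - 1) / seconds) * 60.0 / 5.0

# Example: a 29-character sentence typed in 12 seconds gives 28 WPM.
print(words_per_minute("the quick brown fox jumps ok!", 12.0))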
An Approach to Investigate an Influence of Visual Angle Size on Emotional Activation During a Decision-Making Task, In HCII 2023.
Springer, Cham,
2023. To be published
[Download] [BibSonomy]
@inproceedings{oberdorfer2023approach,
author = {Sebastian Oberdörfer and Sandra Birnstiel and Sophia C. Steinhaeusser and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2023-hcii-igt-visual-angles-preprint.pdf},
year = {2023},
booktitle = {HCII 2023},
publisher = {Springer, Cham},
series = {Lecture Notes in Computer Science},
title = {An Approach to Investigate an Influence of Visual Angle Size on Emotional Activation During a Decision-Making Task}
}
Abstract:
Decision-making is an important ability in our daily lives and can be influenced by emotions. A virtual environment and objects in it might follow an emotional design, thus potentially influencing the mood of a user. A higher visual angle on a particular stimulus can lead to a higher emotional response to it. The use of immersive virtual reality (VR) surrounds a user visually with a virtual environment, as opposed to the partial immersion of using a normal computer screen. This higher immersion may result in a greater visual angle on a particular stimulus and thus a stronger emotional response to it. In a between-subjects user study, we compare the results of a decision-making task presented in VR at three different visual angles. We used the Iowa Gambling Task (IGT) to detect potential differences in decision-making. The IGT was displayed at one of three sizes, thus yielding visual angles of 20°, 35°, and 50°. Our results indicate no difference between the three conditions with respect to decision-making. Thus, our results possibly imply that a higher visual angle has no influence on a task that is influenced by emotions but is otherwise cognitive.
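For context, the visual angle θ subtended by a stimulus of width w at viewing distance d is θ = 2·arctan(w / (2d)). The following sketch (our illustration, not the study's code) inverts this relation to compute the stimulus width needed for the three angles used in the study:

import math

def stimulus_width(visual_angle_deg: float, distance: float) -> float:
    # Invert theta = 2 * atan(w / (2 * d)) to get the stimulus width w
    # that subtends the requested visual angle at viewing distance d.
    return 2.0 * distance * math.tan(math.radians(visual_angle_deg) / 2.0)

# Widths (in meters) required at a 1 m viewing distance for the three conditions.
for angle in (20.0, 35.0, 50.0):
    print(f"{angle:.0f} deg -> {stimulus_width(angle, 1.0):.3f} m")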
Extensible Motion-based Identification of XR Users with Non-Specific Motion, arXiv preprint arXiv:2302.07517.
2023.
[Download] [BibSonomy] [Doi]
@preprint{2023arXiv230207517S,
author = {Christian Schell and Konstantin Kobs and Tamara Fernando and Andreas Hotho and Marc Erich Latoschik},
journal = {arXiv},
url = {https://arxiv.org/abs/2302.07517},
year = {2023},
pages = {arXiv:2302.07517},
title = {Extensible Motion-based Identification of XR Users with Non-Specific Motion}
}
Abstract:
Recently emerged solutions demonstrate that the movements of users interacting with extended reality (XR) applications carry identifying information and can be leveraged for identification. While such solutions can identify XR users within a few seconds, current systems involve one of two trade-offs: either they apply simple distance-based approaches that can only be used for specific, predetermined motions, or they use classification-based approaches that rely on more powerful machine learning models and thus also work for arbitrary motions, but require full retraining to enroll new users, which can be prohibitively expensive. In this paper, we propose to combine the strengths of both approaches by using an embedding-based approach that leverages deep metric learning. We train the model on a dataset of users playing the VR game "Half-Life: Alyx" and conduct multiple experiments and analyses. The results show that the embedding-based method 1) is able to identify new users from non-specific movements using only a few minutes of reference data, 2) can enroll new users within seconds, while retraining a comparable classification-based approach takes almost a day, 3) is more reliable than a baseline classification-based approach when only little reference data is available, and 4) can be used to identify new users from another dataset recorded with different VR devices. Altogether, our solution is a foundation for easily extensible XR user identification systems, applicable even to non-specific movements. It also paves the way for production-ready models that could be used by XR practitioners without requiring expertise, hardware, or data for training deep learning models.
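To make the embedding-based idea concrete, here is a minimal, hypothetical sketch of deep metric learning on motion data (our illustration, not the authors' implementation; the encoder architecture and input dimensions are assumptions): an encoder maps a motion sequence to a unit-length embedding, a triplet loss pulls same-user sequences together, and a new user is enrolled simply by averaging a few reference embeddings, with no retraining:

import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    # Hypothetical encoder: maps a (time, features) motion sequence,
    # e.g. tracked headset and controller poses, to a fixed-size embedding.
    def __init__(self, in_features=18, embed_dim=64):
        super().__init__()
        self.gru = nn.GRU(in_features, 128, batch_first=True)
        self.head = nn.Linear(128, embed_dim)

    def forward(self, x):                        # x: (batch, time, features)
        _, h = self.gru(x)                       # h: (1, batch, 128)
        z = self.head(h.squeeze(0))              # z: (batch, embed_dim)
        return nn.functional.normalize(z, dim=-1)

encoder = MotionEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.2)

# One training step on a triplet: anchor and positive are sequences from
# the same user, the negative comes from a different user.
anchor, positive, negative = (torch.randn(8, 100, 18) for _ in range(3))
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()

# Enrollment and identification: average a few reference embeddings into a
# per-user template; new users require no retraining of the encoder.
with torch.no_grad():
    template = nn.functional.normalize(
        encoder(torch.randn(4, 100, 18)).mean(dim=0), dim=0)
    query = encoder(torch.randn(1, 100, 18))[0]
    print(torch.dot(template, query))            # cosine similarity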
Virtual-to-Physical Surface Alignment and Refinement Techniques for Handwriting, Sketching, and Selection in XR, In IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW).
2023. To be published
[BibSonomy]
@article{kern2023virtualtophysical,
author = {Florian Kern and Jonathan Tschanter and Marc Erich Latoschik},
journal = {IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
year = {2023},
title = {Virtual-to-Physical Surface Alignment and Refinement Techniques for Handwriting, Sketching, and Selection in XR}
}
Abstract:
The alignment of virtual to physical surfaces is essential to improve symbolic input and selection in XR. Previous techniques optimized for efficiency can lead to inaccuracies. We investigate regression-based refinement techniques and introduce a surface accuracy evaluation. The results revealed that refinement techniques can substantially improve surface accuracy and showed that accuracy depends on the gesture shape and surface dimension. Our reference implementation and dataset are publicly available.
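As a rough illustration of the regression idea (our sketch, not the paper's implementation), a physical surface can be refined from several sampled tracker points via a least-squares plane fit instead of relying on a single alignment pose:

import numpy as np

def fit_plane(points: np.ndarray):
    # Least-squares plane fit to N sampled 3D points, e.g. pen-tip
    # positions collected while the user traces the physical surface.
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector belonging to the
    # smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def distance_to_plane(p, centroid, normal):
    # Signed point-to-plane distance; a simple surface-accuracy measure.
    return float(np.dot(p - centroid, normal))

# Example: noisy samples of the z = 0 plane.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (100, 2)), rng.normal(0, 0.002, 100)])
c, n = fit_plane(pts)
print(distance_to_plane(np.array([0.0, 0.0, 0.01]), c, n))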
Usability of a mHealth Solution Using Speech Recognition for Point-of-care Diagnostic Management, In Journal of Medical Systems, Vol. 47(18).
2023.
[Download] [BibSonomy] [Doi]
@article{kerwagen2023,
author = {Fabian Kerwagen and Konrad F. Fuchs and Melanie Ullrich and Andreas Schulze and Samantha Straka and Philipp Krop and Marc E. Latoschik and Fabian Gilbert and Andreas Kunz and Georg Fette and Stefan Störk and Maximilian Ertl},
journal = {Journal of Medical Systems},
number = {18},
url = {https://link.springer.com/article/10.1007/s10916-022-01896-y},
year = {2023},
title = {Usability of a mHealth Solution Using Speech Recognition for Point-of-care Diagnostic Management}
}
Abstract:
The administrative burden for physicians in the hospital can affect the quality of patient care. The Service Center Medical Informatics (SMI) of the University Hospital Würzburg developed and implemented the smartphone-based mobile application (MA) ukw.mobile1, which uses speech recognition for the point-of-care ordering of radiological examinations. The aim of this study was to examine the usability of the MA workflow for the point-of-care ordering of radiological examinations. All physicians at the Department of Trauma and Plastic Surgery at the University Hospital Würzburg, Germany, were asked to participate in a survey including the short version of the User Experience Questionnaire (UEQ-S) and the Unified Theory of Acceptance and Use of Technology (UTAUT). For the analysis of the different domains of user experience (overall attractiveness, pragmatic quality, and hedonic quality), we used a two-sided dependent-sample t-test. For the determinants of the acceptance model, we employed regression analysis. Twenty-one of 30 physicians (mean age 34 ± 8 years, 62% male) completed the questionnaire. Compared to the conventional desktop application (DA) workflow, the new MA workflow showed superior overall attractiveness (mean difference 2.15 ± 1.33), pragmatic quality (mean difference 1.90 ± 1.16), and hedonic quality (mean difference 2.41 ± 1.62; all p < .001). The user acceptance measured by the UTAUT (mean 4.49 ± 0.41; min. 1, max. 5) was also high. Performance expectancy (beta = 0.57, p = .02) and effort expectancy (beta = 0.36, p = .04) were identified as predictors of acceptance; the full predictive model explained 65.4% of the variance. Point-of-care mHealth solutions using innovative technology such as speech recognition seem to address users' needs and to offer higher usability than conventional technology. The implementation of user-centered mHealth innovations might therefore help facilitate physicians' daily work.
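For readers unfamiliar with the two analyses named above, the following sketch shows them in Python with made-up data (ours, not the study's): a dependent-sample t-test on paired UEQ-S ratings and an ordinary-least-squares regression of acceptance on the two UTAUT determinants:

import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 21  # respondents, as in the study

# Paired t-test: each physician rates both workflows (illustrative data).
ueq_desktop = rng.normal(0.5, 1.0, n)
ueq_mobile = ueq_desktop + rng.normal(2.0, 1.3, n)  # assumed improvement
t, p = stats.ttest_rel(ueq_mobile, ueq_desktop)
print(f"t = {t:.2f}, p = {p:.4f}")

# OLS regression: acceptance ~ performance expectancy + effort expectancy.
perf = rng.uniform(1, 5, n)
effort = rng.uniform(1, 5, n)
acceptance = 1.0 + 0.5 * perf + 0.3 * effort + rng.normal(0, 0.3, n)
model = sm.OLS(acceptance, sm.add_constant(np.column_stack([perf, effort]))).fit()
print(model.summary())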
A Subjective Quality Assessment of Temporally Reprojected Specular Reflections in Virtual Reality, In 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 1--2.
2023. To be published
[BibSonomy]
@inproceedings{misiak2023subjective,
author = {Martin Mišiak and Arnulph Fuhrmann and Marc Erich Latoschik},
year = {2023},
booktitle = {2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {1--2},
title = {A Subjective Quality Assessment of Temporally Reprojected Specular Reflections in Virtual Reality}
}
Abstract:
Temporal reprojection is a popular method for mitigating sampling artifacts from a variety of sources. This work investigates its impact on the subjective quality of specular reflections in Virtual Reality (VR). Our results show that temporal reprojection is highly effective at improving the visual comfort of specular materials, especially at low sample counts. A slightly weaker effect was also observed for the subjective accuracy of the resulting reflections.
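As background, temporal reprojection accumulates the current frame's noisy specular sample into a history buffer that has been reprojected along motion vectors. A minimal exponential-moving-average sketch of the accumulation step (our illustration, not the paper's renderer) looks like this:

import numpy as np

def temporal_accumulate(history, current, alpha=0.1):
    # Blend the current noisy specular sample into the reprojected
    # history buffer; a small alpha keeps more history and suppresses
    # sampling noise at the cost of temporal lag (ghosting).
    return (1.0 - alpha) * history + alpha * current

# Toy example: a constant signal corrupted by per-frame sampling noise.
rng = np.random.default_rng(2)
history = np.zeros((4, 4, 3))        # tiny "framebuffer" of RGB pixels
for _ in range(100):
    noisy_frame = 0.5 + rng.normal(0.0, 0.2, history.shape)
    history = temporal_accumulate(history, noisy_frame)
print(history.mean())                # converges toward the true value 0.5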