Development of an Animation System for Social XR Applications in Unity
This project is already assigned.
Motivation
Social XR allows users with varying devices to interact with each other, and embodied interaction improves the social interaction between users (Smith and Neff, 2018). While recent advances can generate full-body motion for users with only three-point tracking (Jiang et al., 2022), less immersive devices without tracking capabilities, such as desktop computers or smartphones, remain common. The research area of motion generation synthesizes co-speech gesture animations from the user's speech (Krome and Kopp, 2023). However, these models do not generate motions for walking or interacting, which are important use cases for social XR. Unity's animation system, in contrast, allows implementing such movement sequences, as in many video games.
Goal
This project focuses on developing a comprehensive animation system within the Unity engine, aimed at improving the user experience in social XR applications on both desktop and smartphone platforms. The task involves creating an animation system and blending its output with animations generated from speech.
- Animation System: Utilize Unity’s animation tools to develop an animation system that responds to specific inputs, such as walking or idling. This will involve using Unity’s animation capabilities and scripting to produce smooth, realistic movements that can be influenced through user interactions (see the first sketch after this list).
- Blending Mechanism: Design and implement a system for seamlessly blending animations produced by Unity’s animation system with the motions generated by the speech-driven motion generation model (see the second sketch after this list).
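One way the first point could be realized is a script that drives an Animator parameter from user input, letting a 1D blend tree cross-fade between idle and walk clips. This is a minimal sketch, not a prescribed solution: the parameter name "Speed", the input mapping, and the movement code are illustrative assumptions.

```csharp
using UnityEngine;

// Minimal sketch: drives an Animator parameter from user input so that a
// 1D blend tree can cross-fade between idle and walk clips.
// Assumes (hypothetically) an Animator Controller with a float parameter
// named "Speed" and Unity's legacy input axes.
[RequireComponent(typeof(Animator))]
public class LocomotionController : MonoBehaviour
{
    [SerializeField] private float walkSpeed = 1.5f; // meters per second
    [SerializeField] private float damping = 0.1f;   // smoothing time for parameter changes

    private Animator animator;
    private static readonly int SpeedParam = Animator.StringToHash("Speed");

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    private void Update()
    {
        // Read desktop input; on smartphones this would come from a
        // virtual joystick or tap-to-move instead.
        Vector2 input = new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"));
        input = Vector2.ClampMagnitude(input, 1f);
        float targetSpeed = input.magnitude * walkSpeed;

        // SetFloat with damping smooths the transition between idle and walk.
        animator.SetFloat(SpeedParam, targetSpeed, damping, Time.deltaTime);

        // Move the avatar; root motion could be used instead.
        Vector3 direction = new Vector3(input.x, 0f, input.y);
        transform.Translate(direction * walkSpeed * Time.deltaTime, Space.Self);
    }
}
```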
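For the second point, one possible approach is to play the generated gestures on a separate Animator layer (restricted to the upper body via an avatar mask) and fade that layer's weight in while the user is speaking. The layer index and the IsSpeaking signal below are assumptions, not part of the project description.

```csharp
using UnityEngine;

// Minimal sketch of one possible blending approach: speech-driven gestures
// play on a separate, upper-body-masked Animator layer, whose weight is
// faded in and out so locomotion on the base layer stays untouched.
// The layer index and the IsSpeaking flag are hypothetical.
[RequireComponent(typeof(Animator))]
public class GestureBlendController : MonoBehaviour
{
    [SerializeField] private int gestureLayer = 1; // layer holding generated gestures
    [SerializeField] private float fadeSpeed = 4f; // weight change per second

    private Animator animator;

    // Assumed to be set by the speech-driven motion generation pipeline.
    public bool IsSpeaking { get; set; }

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    private void Update()
    {
        // Move the layer weight towards 1 while speaking, towards 0 otherwise.
        float target = IsSpeaking ? 1f : 0f;
        float weight = Mathf.MoveTowards(
            animator.GetLayerWeight(gestureLayer), target, fadeSpeed * Time.deltaTime);
        animator.SetLayerWeight(gestureLayer, weight);
    }
}
```

Depending on how the generative model delivers its motion (clips, streamed poses, or joint rotations), finer-grained mixing, for example via Unity's Playables API, may be preferable to layer weights.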
Tasks
The topic will focus on the following tasks:
- Literature and state-of-the-art research
- Creating an animation system focusing on walking
- Blending animations of the generative model and the animation system
- Conducting a user study
- Evaluation and presentation of results
Prerequisites
- Experience with Unity game engine
Literature
- Smith, H. J., & Neff, M. (2018, April). Communication behavior in embodied virtual reality. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1-12).
- Jiang, J., Streli, P., Qiu, H., Fender, A., Laich, L., Snape, P., & Holz, C. (2022, October). Avatarposer: Articulated full-body pose tracking from sparse motion sensing. In European Conference on Computer Vision (pp. 443-460). Cham: Springer Nature Switzerland.
- Krome, N., & Kopp, S. (2023, September). Towards Real-time Co-speech Gesture Generation in Online Interaction in Social XR. In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
Contact Persons at the University of Würzburg
Christian Merz (Primary Contact Person)
Human-Computer Interaction, Psychology of Intelligent Interactive Systems, Universität Würzburg
christian.merz@uni-wuerzburg.de
Prof. Dr. Marc Erich Latoschik
Human-Computer Interaction, Universität Würzburg
marc.latoschik@uni-wuerzburg.de
Prof. Dr. Carolin Wienrich
Psychology of Intelligent Interactive Systems, Universität Würzburg
carolin.wienrich@uni-wuerzburg.de