Integrating human motion generation based on sparse tracking into Unity for full-body movement
This call for a thesis or project is open for the following modules:
If you are interested, please get in touch with one of the primary contact persons listed below.
Motivation
In social virtual reality applications, users are embodied by avatars, which in most cases consist only of a head, a torso, and hands due to the sparse tracking provided by the devices, although full-body avatars may be superior (Pan and Steed, 2019). Various approaches exist to predict a complete human pose from sparse tracking, including inverse kinematics (Caserman et al., 2019) and, more recently, deep learning (Du et al., 2023; Jiang et al., 2022). This allows users to interact in virtual environments with a full-body avatar, potentially improving the user experience.
Goal
This project focuses on researching the current state of the art of deep learning for motion prediction based on sparse tracking, with the goal of integrating a model into a Unity application that provides a generated human body pose in real time. Existing models can be used for this purpose and have to be integrated into a Unity application. This integration includes sending the raw tracking data to the model and afterwards applying its output to an avatar in the Unity application.
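The exact interface depends on the chosen model, but one possible integration path is to run the pretrained network in a small Python process and exchange pose data with the Unity application over a local socket. The sketch below illustrates the Python side under that assumption; the host and port, the message layout (three trackers with position and rotation per frame), and load_pretrained_model are placeholders, not part of any specific model's API.

```python
import socket

import numpy as np
import torch

HOST, PORT = "127.0.0.1", 9000      # assumed local endpoint; must match the Unity client
N_TRACKERS = 3                      # head + two hand controllers
FLOATS_PER_TRACKER = 7              # position (3) + rotation quaternion (4)
N_BODY_JOINTS = 22                  # depends on the body model used by the network


def load_pretrained_model():
    """Placeholder for loading the actual pose-generation network.

    Returns a dummy callable that outputs identity rotations so the data
    flow between Unity and Python can be tested before the real model is
    plugged in.
    """
    def dummy(trackers: torch.Tensor) -> torch.Tensor:
        batch = trackers.shape[0]
        pose = torch.zeros(batch, N_BODY_JOINTS, 4)
        pose[..., 3] = 1.0          # identity quaternion (x, y, z, w)
        return pose
    return dummy


def main():
    model = load_pretrained_model()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    print(f"Listening for tracker data on {HOST}:{PORT}")
    while True:
        data, addr = sock.recvfrom(4096)
        # One frame from Unity: N_TRACKERS * FLOATS_PER_TRACKER float32 values.
        frame = np.frombuffer(data, dtype=np.float32).copy()
        frame = frame.reshape(N_TRACKERS, FLOATS_PER_TRACKER)
        with torch.no_grad():
            inp = torch.from_numpy(frame).unsqueeze(0)      # add batch dimension
            full_pose = model(inp)                          # (1, N_BODY_JOINTS, 4)
        # Send the predicted joint rotations back to the Unity application.
        sock.sendto(full_pose.squeeze(0).numpy().astype(np.float32).tobytes(), addr)


if __name__ == "__main__":
    main()
```

On the Unity side, a C# script would serialize the tracked head and hand transforms into the same float layout and apply the returned joint rotations to the avatar rig each frame; depending on the chosen model, an alternative is to export it to ONNX and run it directly inside Unity with an in-engine inference library such as Barracuda.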
Tasks
The topic will focus on the following tasks:
- Literature and state-of-the-art research
- Understanding and using existing generative machine learning models
- Integration of the model into a Unity application
- Evaluation and presentation of results
Prerequisites
- Experience with the Unity game engine
- Experience with machine learning and Python
Literature
- Caserman, P., Achenbach, P., & Göbel, S. (2019, August). Analysis of inverse kinematics solutions for full-body reconstruction in virtual reality. In 2019 IEEE 7th International Conference on Serious Games and Applications for Health (SeGAH) (pp. 1-8). IEEE.
- Du, Y., Kips, R., Pumarola, A., Starke, S., Thabet, A., & Sanakoyeu, A. (2023). Avatars grow legs: Generating smooth human motion from sparse tracking inputs with diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 481-490).
- Jiang, J., Streli, P., Qiu, H., Fender, A., Laich, L., Snape, P., & Holz, C. (2022, October). AvatarPoser: Articulated full-body pose tracking from sparse motion sensing. In European Conference on Computer Vision (pp. 443-460). Cham: Springer Nature Switzerland.
Contact Persons at the University of Würzburg
Christian Merz (Primary Contact Person)
Human-Computer Interaction, Psychology of Intelligent Interactive Systems, Universität Würzburg
christian.merz@uni-wuerzburg.de
Lukas Schach (Primary Contact Person)
Human-Computer Interaction, Universität Würzburg
lukas.schach@uni-wuerzburg.de
Prof. Dr. Carolin Wienrich
Psychology of Intelligent Interactive Systems, Universität Würzburg
carolin.wienrich@uni-wuerzburg.de
Prof. Dr. Marc Erich Latoschik
Human-Computer Interaction, Universität Würzburg
marc.latoschik@uni-wuerzburg.de