Human-Computer Interaction

Evaluate Full Body Pose Estimation Based on Sparse Tracking


This call for a thesis or project is open for the following modules:
If you are interested, please get in touch with the primary contact person listed below.

Social interaction in virtual worlds is one of the most promising application areas of virtual reality (VR). In social VR applications, users are represented by avatars, which in most cases consist only of a head, torso, and hands due to limited tracking capabilities, although full-body avatars may be superior (Pan and Steed, 2019). There are different approaches to animating the avatar's full body: full-body motion capture (Anvari et al., 2022), inverse kinematics driven by the user's controller and headset movements (Aristidou et al., 2018; Anvari et al., 2022), or deep learning (Jiang et al., 2022; Winkler et al., 2022; Ponton et al., 2022; Anvari et al., 2022; Anvari et al., 2023).

Goal

The goal of this project is to integrate different approaches, e.g., inverse kinematics (Final IK) and deep learning (Meta Movement SDK), for estimating the full body posture based on the user's controller and headset movements. The approaches will then be compared in a user study with respect to the quality of the resulting poses in terms of plausibility and human-likeness.
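Both families of approaches reconstruct joint configurations from the same sparse input: head and hand targets from the headset and controllers. The core idea behind the inverse-kinematics route can be illustrated with an analytical two-bone solver, which places an elbow given a shoulder position and a tracked hand target via the law of cosines. This is a minimal, hypothetical 2D sketch for illustration only, not the Final IK or Meta Movement SDK implementation:

```python
import math

def two_bone_ik(shoulder, target, upper_len, lower_len):
    """Analytical two-bone IK in 2D: given the shoulder position and a
    hand target (e.g., a tracked VR controller), return the shoulder and
    elbow angles that place the hand on the target."""
    dx, dy = target[0] - shoulder[0], target[1] - shoulder[1]
    # Clamp the shoulder-to-target distance to the arm's maximum reach.
    dist = min(math.hypot(dx, dy), upper_len + lower_len - 1e-6)
    # Law of cosines: interior angle at the elbow between the two bones.
    cos_elbow = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    elbow_angle = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder rotation: direction to the target plus the interior angle
    # of the shoulder-elbow-target triangle at the shoulder.
    cos_shoulder = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    shoulder_angle = math.atan2(dy, dx) + math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return shoulder_angle, elbow_angle
```

Production IK systems extend this idea to full 3D limb chains with pole-vector constraints and joint limits; the deep-learning approaches instead learn the mapping from sparse tracker poses to full-body poses from motion-capture data.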

Tasks

The topic will focus on the following tasks:

Prerequisites

Literature

  1. Pan, Y., & Steed, A. (2019, November). Avatar type affects performance of cognitive tasks in virtual reality. In Proceedings of the 25th ACM symposium on virtual reality software and technology (pp. 1-4).
  2. Anvari, T., & Park, K. (2022, October). 3D Human Body Pose Estimation in Virtual Reality: A survey. In 2022 13th International Conference on Information and Communication Technology Convergence (ICTC) (pp. 624-628). IEEE.
  3. Aristidou, A., Lasenby, J., Chrysanthou, Y., & Shamir, A. (2018, September). Inverse kinematics techniques in computer graphics: A survey. In Computer graphics forum (Vol. 37, No. 6, pp. 35-58).
  4. Jiang, J., Streli, P., Qiu, H., Fender, A., Laich, L., Snape, P., & Holz, C. (2022). AvatarPoser: Articulated full-body pose tracking from sparse motion sensing. In European Conference on Computer Vision (pp. 443-460). Springer, Cham.
  5. Winkler, A., Won, J., & Ye, Y. (2022). QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars. arXiv preprint arXiv:2209.09391.
  6. Ponton, J. L., Yun, H., Andujar, C., & Pelechano, N. (2022). Combining Motion Matching and Orientation Prediction to Animate Avatars for Consumer-Grade VR Devices. arXiv preprint arXiv:2209.11478.
  7. Anvari, T., Park, K., & Kim, G. (2023). Upper Body Pose Estimation Using Deep Learning for a Virtual Reality Avatar. Applied Sciences, 13(4), 2460.

Contact Persons at the Universität Würzburg

Jonathan Tschanter (Primary Contact Person)
Human-Computer Interaction, Psychology of Intelligent Interactive Systems, Universität Würzburg
jonathan.tschanter@uni-wuerzburg.de

Christian Merz (Primary Contact Person)
Human-Computer Interaction, Psychology of Intelligent Interactive Systems, Universität Würzburg
christian.merz@uni-wuerzburg.de

Prof. Dr. Marc Erich Latoschik
Human-Computer Interaction, Universität Würzburg
marc.latoschik@uni-wuerzburg.de

Prof. Dr. Carolin Wienrich
Psychology of Intelligent Interactive Systems, Universität Würzburg
carolin.wienrich@uni-wuerzburg.de
