Interpersonal Synchronization in Large-scale Virtual Environments
This project has already been completed.

Background
Large-scale Virtual Environments (VEs) enable multiple users wearing Head-Mounted Displays (HMDs) to simultaneously explore digital artifacts, such as industrial prototypes or architectural drafts. In contrast to CAVE setups, interpersonal communication is restricted because the users cannot see each other. Incorporating virtual avatars counteracts this restriction but requires sensors to detect and virtually replicate gestures and social signals.
In the course of this project, unobtrusive Bluetooth acceleration sensors shall be attached to the users' limbs. The data acquired by these sensors shall be analyzed using machine learning techniques so that a user can trigger a small set of predefined animation sequences of their avatar, similar to emotes in video games.
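To illustrate the intended pipeline, here is a minimal sketch of window-based gesture classification from 3-axis accelerometer data. The window length, the feature set, the number of gesture classes, and the Random Forest classifier are illustrative assumptions, not choices made by the project; the synthetic arrays stand in for real sensor recordings.

```python
# Minimal sketch: classify fixed-length accelerometer windows into gesture
# classes. Window length, features, and classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

WINDOW = 128  # samples per gesture window (assumed length)

def features(window: np.ndarray) -> np.ndarray:
    """Per-axis statistical features: mean, std, min, max."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

# Placeholder training data: one (WINDOW, 3) array per recorded gesture.
# In the project, these would come from the Bluetooth sensor recordings.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, WINDOW, 3))   # fake accelerometer recordings
labels = rng.integers(0, 3, size=200)         # three placeholder emote classes

X = np.stack([features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25,
                                                    random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```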
Tasks
- Define a small set of emotes as well as associated gestures to trigger them
- Adapt the avatar module UnrealMe to support the animation of the defined emotes
- Use the Bluetooth acceleration sensors to collect training sets for all gestures
- Select a machine learning technique and train it to classify the gestures (a runtime sketch follows this list)
- Connect the implementation to the large-scale tracking system of Fraunhofer IIS
- Implement a demonstrator that facilitates the exploration of a large VE by at least two users simultaneously
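To make the classification and triggering tasks concrete, the following is a hedged sketch of how a trained classifier might drive emote triggers at runtime. The sensor read function, the emote set, the UDP endpoint, and the JSON message format are all assumptions made for illustration; UnrealMe's actual animation interface is not specified here.

```python
# Hypothetical runtime loop: classify the most recent accelerometer window
# and forward a recognized emote to the avatar module. Sensor access, emote
# set, endpoint, and message format are illustrative assumptions; UnrealMe's
# actual interface may differ.
import json
import socket
from collections import deque

import numpy as np

WINDOW = 128                                   # samples per window (assumed)
EMOTES = {0: "wave", 1: "clap", 2: "point"}    # placeholder emote set

AVATAR_ENDPOINT = ("127.0.0.1", 9000)          # hypothetical UnrealMe listener
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def read_sample() -> np.ndarray:
    """Stand-in for one 3-axis reading from a Bluetooth sensor."""
    return np.random.normal(size=3)

def classify(window: np.ndarray) -> int:
    """Stand-in for the trained classifier from the previous sketch."""
    return 0

buffer = deque(maxlen=WINDOW)
while True:
    buffer.append(read_sample())
    if len(buffer) == WINDOW:
        label = classify(np.array(buffer))
        sock.sendto(json.dumps({"emote": EMOTES[label]}).encode(),
                    AVATAR_ENDPOINT)
        buffer.clear()  # require a fresh window before the next trigger
```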
Conditions
The large-scale tracking system of Fraunhofer IIS in Nürnberg will be used. This requires participants either to stay in Nürnberg or to commute there for about 3 to 4 weeks.
Partners
Fraunhofer IIS
Tobias Feigl
Dr. Stephan Otto
Dr.-Ing. Christopher Mutschler
Contact Persons at the University of Würzburg
Daniel Roth (Primary Contact Person), HCI, University of Würzburg
daniel.roth@uni-wuerzburg.de