Human-Computer Interaction

Motion Data Sensor Fusion


This call for a thesis or project is open for the following modules:
If you are interested, please get in touch with the primary contact person listed below.

Background

Relying on a single sensor type for capturing data makes an application prone to the sensors' intrinsic noise and drift. Smartphones overcome this by fusing GPS positions with measurable physical quantities such as acceleration and rotation, improving the location estimate when the device is used for route guidance. Two questions have to be answered: first, how does one assess the confidence in the provided sensor data with respect to spatial and temporal performance, and second, how can the data streams be joined to improve the overall prediction, as sketched below?
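
To make the second question concrete, here is a minimal sketch of confidence-weighted fusion in Python, assuming each sensor's uncertainty is known as a variance; the function name and the example values are hypothetical, not part of MoSeF. It joins independent estimates by inverse-variance weighting, the building block of a Kalman-style update: the more confident a sensor, the more it influences the result, and the fused variance is smaller than either input.

    # Sketch: inverse-variance fusion of independent noisy estimates.
    import numpy as np

    def fuse(estimates, variances):
        """Combine independent estimates by inverse-variance weighting.

        estimates, variances: sequences of per-sensor values; the fused
        variance is smaller than any single input variance, which is the
        core benefit of fusing redundant sensors.
        """
        estimates = np.asarray(estimates, dtype=float)
        variances = np.asarray(variances, dtype=float)
        weights = 1.0 / variances            # higher confidence -> larger weight
        fused = np.sum(weights * estimates) / np.sum(weights)
        fused_var = 1.0 / np.sum(weights)
        return fused, fused_var

    # Hypothetical example: a drifting IMU-integrated position (variance 4.0)
    # vs. a more trustworthy GPS fix (variance 1.0).
    fused_pos, fused_var = fuse([10.3, 9.8], [4.0, 1.0])
    print(fused_pos, fused_var)   # 9.9, 0.8 -- closer to the GPS fix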

In this project, we aim to replicate this behavior for motion capture and its applications. This is beneficial wherever spatial drift, offset, and latency cause problems, such as in VR and AR applications. As a first step, MoSeF is going to fuse data from an optical motion capture system with positional data from a virtual reality headset by registering the two data streams with each other (see the sketch below). It will not stop there: in a later stage, it is going to evolve into a generalized engine that filters multiple data streams at once and provides the adjusted data to the application above. The goal is to refine physical data, where necessary, by adding sensors to the setup and piping their data streams into the fusion algorithms.
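As an illustration of the registration step, the following Python sketch estimates the rigid transform between time-synchronized position samples from the two tracking systems using the Kabsch algorithm; the data is simulated and all names are hypothetical, so this is one plausible approach rather than the project's actual pipeline. Once R and t are known, the mocap stream can be mapped into the headset's coordinate frame before fusion takes place.

    # Sketch: rigid registration of two positional data streams, assuming
    # corresponding, time-synchronized 3D samples from an optical mocap
    # system and an HMD's tracking.
    import numpy as np

    def register_rigid(src, dst):
        """Least-squares rigid transform so that dst ~ R @ src + t.

        src, dst: (N, 3) arrays of corresponding 3D points.
        """
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        H = src_c.T @ dst_c                      # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    # Simulated example: headset samples are a rotated, translated copy
    # of the mocap samples.
    rng = np.random.default_rng(0)
    mocap = rng.normal(size=(100, 3))
    true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(true_R) < 0:
        true_R[:, 0] *= -1                       # ensure a proper rotation
    headset = mocap @ true_R.T + np.array([0.1, 1.6, -0.3])

    R, t = register_rigid(mocap, headset)
    aligned = mocap @ R.T + t                    # mocap stream in headset frame
    print(np.abs(aligned - headset).max())       # close to 0 for noise-free data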

Tasks

The project will focus on the following tasks:

Prerequisites

Optional


Contact Persons at the University of Würzburg

Marc Erich Latoschik
Mensch-Computer-Interaktion, Universität Würzburg
marc.latoschik@uni-wuerzburg.de

Sebastian Oberdörfer
Mensch-Computer-Interaktion, Universität Würzburg
sebastian.oberdoerfer@uni-wuerzburg.de

Matthias Popp (Primary Contact Person)
Mensch-Computer-Interaktion, Universität Würzburg
matthias.popp@uni-wuerzburg.de
