Human-Computer Interaction

Research Projects


AIL AT WORK is a project concerned with analysing, evaluating, and developing AI simulations in a working context. It combines expertise from computer science (Marc Erich Latoschik), human-computer interaction (Carolin Wienrich), and psychology (Astrid Carolus).
femtoPro is an interactive femtosecond laser laboratory simulator in virtual reality. Users follow step-by-step instructions and learn to set up and handle complex laser experiments in an eye-safe manner.
HistKI: AI-based support of image source research and criticism
The project HistKI explores how multimodal AI-based methods can support and model image source research and criticism, a complex and fundamental technique of historical scholarship.
In the project VIA-VR, we aim to create a framework for modeling and staging strongly engaging VR adventures for medical training, prevention, therapy and rehabilitation.
Beyond safety and efficiency in acute care: The experience of an embodied staff-environment interaction
In this project we are exploring human-centered pervasive interaction between staff and technology in acute care.
Wassermühle VR
Wassermühle VR is a virtual learning environment for learning about history. The virtual environment is based on a medieval watermill and teaches users about the life of people of that time.
Games Engineering Life Sciences Initiative
The Games Engineering Life Sciences Initiative is taking on the challenges in - and bridging gaps for - real-time interactive systems for biology and medicine.
Horst - The Teaching Frog
Horst - The Teaching Frog is a tangible AR system for learning a frog's anatomy. The learning environment is based on a plush frog containing removable markers. When a marker is detected, it is replaced with a 3D model of the corresponding organ.
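At its core, this kind of marker-based substitution is a lookup from detected marker IDs to renderable models. A minimal sketch (hypothetical IDs and file names, not Horst's actual implementation):

```python
# Hypothetical mapping from fiducial marker IDs to organ models;
# the AR layer renders the model at the detected marker's pose.
MARKER_TO_ORGAN = {
    7: "heart.obj",
    12: "lungs.obj",
    23: "stomach.obj",
}

def models_for_detected(marker_ids):
    """Return the organ models for recognized marker IDs, ignoring unknown markers."""
    return [MARKER_TO_ORGAN[m] for m in marker_ids if m in MARKER_TO_ORGAN]

print(models_for_detected([12, 99, 7]))  # ['lungs.obj', 'heart.obj']
```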
VRescue is a virtual reality application for the training of first aid measures in a controlled digital environment.
LEED - Die Zukunft des MINT Lernens (The Future of STEM Learning)
The project aims at using Virtual Reality (VR) and Augmented Reality (AR) in STEM teaching and at exploring the pedagogical potential of Tangible AR.
In this project we investigate Virtual Reality Exposure Therapy for people who stutter. The focus is a prototype presenting a virtual audience.
GAL - Generating asemic languages using deep learning
Winner of the Hochschulwettbewerb (university competition) of the German Science Year 2019 - Artificial Intelligence. The project is dedicated to generating languages without semantic meaning.
Virtual Audiences
A New Tool for Controlling Audiences in VR for Training and Therapy
ILAST - Immersive Leg Coordination And Strength Therapy
ILAST is a prototype of an immersive Virtual Reality (VR) training system for post-operative therapy treatment after knee injuries. The training system uses basic game-like experiences simulated in fully immersive virtual environments which motivate patients to perform dedicated movement tasks mobilizing and strengthening their lower limbs.
Exploring and Modifying the Sense of Time in Virtual Environments.
Interactive OPERA
Interactive virtual environments for Objective evaluation of Psychophysical Effects based on bRain Activity.
Digitalisierungszentrum Präzisions- und Telemedizin (DZ.PTM, Digitization Centre for Precision and Telemedicine)
This project serves to establish the structure of the DZ.PTM Würzburg–Bad Kissingen as a virtual centre with three locations designated to develop, test and implement digitization projects across Bavaria to support patient care and research.
This project investigates how Virtual Reality can support the motor rehabilitation of patients suffering from the consequences of stroke.
An Immersive and Gamified Virtual Reality Rehabilitation System to Increase Motivation During Gait Rehabilitation
Embodiment Lab
Embodied multimodal interfaces and digital humans are becoming increasingly important for novel human-computer interface paradigms. The Embodiment Lab establishes a Bavarian competence center for the creative development of related application use cases and the necessary technology.
The project aims at the development of teaching sequences that can be flexibly combined to form new courses or integrated into existing courses to expand competencies with regard to using VR and AR.
Breaking Bad Behaviors
A New Tool for Learning Classroom Management Using Virtual Reality
Interactive Memories (InterMem)
InterMem explores the usefulness of multimodal and multimedia interfaces with an increased perceptual coupling to strengthen the positive effects of biography work with patients suffering from dementia.
HistStadt4D: 'Multimodal access to historic image repositories to support the research and communication of city and architectural history' is a BMBF-funded junior scientist group. The research group investigates and develops methodical and technological approaches in order to merge, structure and annotate images in media repositories and additional information related to their place and time.
Dimensions of Virtual Body Ownership
Users experience the perception of virtual bodies in immersive and semi-immersive virtual environments.
GEtiT (Gamified Training Environment for Affine Transformations) provides interactive, gamified 3D training of affine transformations: users apply their knowledge of affine transformations to solve challenging puzzles presented in an immersive and intuitive 3D environment.
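The mathematical concept GEtiT trains, composing affine transformations in homogeneous coordinates, can be illustrated in a few lines. This is a hypothetical 2D sketch, not GEtiT code (GEtiT itself works in 3D):

```python
# Composing 2D affine transformations as 3x3 matrices in homogeneous
# coordinates; composition order matters, as the example shows.

def mat_mul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, point):
    """Apply a 3x3 homogeneous transform to a 2D point."""
    x, y = point
    v = (x, y, 1)
    return (sum(m[0][k] * v[k] for k in range(3)),
            sum(m[1][k] * v[k] for k in range(3)))

translate = [[1, 0, 2], [0, 1, 3], [0, 0, 1]]  # shift by (2, 3)
scale = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]      # uniform scale by 2

m1 = mat_mul(translate, scale)  # scale first, then translate
m2 = mat_mul(scale, translate)  # translate first, then scale

print(apply(m1, (1, 1)))  # (4, 5)
print(apply(m2, (1, 1)))  # (6, 8)
```

The two results differ because matrix composition is not commutative, which is exactly the kind of insight the puzzles are designed to exercise.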
XRoads explores novel and multimodal interaction techniques for tabletop games. It focuses on Mixed Reality platforms that combine touch, speech, and gestures as input modalities for turn-based and real-time strategy games.
Multimodal interfaces (MMIs) are a promising alternative human-computer interaction paradigm. They are feasible for a wide range of environments, yet they are especially well suited when interactions are spatially and temporally grounded in an environment in which the user is (physically) situated, such as virtual reality, mixed reality, human-robot interaction, and computer games.
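The temporal grounding mentioned above can be illustrated with a classic "put-that-there"-style fusion step: deictic words from the speech channel are resolved against the pointing gesture closest in time. This is a hypothetical sketch, not code from any of the projects listed:

```python
# Timestamped speech tokens and pointing-gesture targets (hypothetical data).
speech = [(0.2, "put"), (0.6, "that"), (1.4, "there")]
pointing = [(0.65, "red_block"), (1.35, "table_corner")]

def resolve(word_time, gestures, window=0.5):
    """Return the gesture target closest in time to a deictic word,
    or None if nothing falls within the fusion window."""
    best = min(gestures, key=lambda g: abs(g[0] - word_time))
    return best[1] if abs(best[0] - word_time) <= window else None

# Bind each deictic word to its temporally co-occurring gesture.
command = {word: resolve(t, pointing)
           for t, word in speech if word in ("that", "there")}
print(command)  # {'that': 'red_block', 'there': 'table_corner'}
```

Real fusion engines additionally weigh spatial plausibility and dialogue context, but the time-window alignment shown here is the basic mechanism.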
CaveUDK is a high-level VR middleware based on one of the most successful commercial game engines: the Unreal® Engine 3.0 (UE3). It is a VR framework implemented as an extension to the Unreal® Development Kit (UDK) supporting CAVE-like installations.
Simulator X
Simulator X is a research testbed for novel software techniques and architectures for Real-Time Interactive Systems in VR, AR, MR, and computer games. It uses the central concept of semantic reflection based on a highly concurrent actor model to build intelligent multimodal graphics systems.
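The actor model at the heart of Simulator X can be illustrated with a minimal sketch (hypothetical Python, not Simulator X code): each actor owns a mailbox and processes messages sequentially on its own thread, so state is only ever changed via message passing.

```python
import queue
import threading

class Actor:
    """Minimal actor: owns a mailbox and handles messages one at a time
    on a dedicated thread, avoiding shared-state locking."""

    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        self.mailbox.put(msg)  # asynchronous, non-blocking message passing

    def stop(self):
        self.mailbox.put(None)  # sentinel: drain remaining messages, then exit
        self._thread.join()

    def _run(self):
        while (msg := self.mailbox.get()) is not None:
            self.handler(msg)

# A 'scene state' actor that applies updates sent by other components.
state = {}
scene = Actor(lambda update: state.update(update))
scene.send({"avatar_pos": (0.0, 1.7, 0.0)})
scene.send({"grabbed": "laser_mirror"})
scene.stop()  # blocks until all queued messages are processed
print(state)  # {'avatar_pos': (0.0, 1.7, 0.0), 'grabbed': 'laser_mirror'}
```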
SEARIS (Software Techniques and Architectures for Real-Time Interactive Systems) is an international research collaboration founded in 2007. Its goal is to advance the field of RIS software engineering.
SIRIS (Semantic Reflection for Intelligent Realtime Interactive Systems) is a research project exploring novel software architectures for Virtual, Augmented, and Mixed Reality, computer games, and similar domains.
Games and Interactive Media
A research and education collaboration dating from the time at HTW Berlin. The work is now continued in several new activities.
SCIVE (Simulation Core for Intelligent Virtual Environments) explores software techniques combining Artificial Intelligence (AI) methods with Virtual and Augmented Reality (VR/AR).
PASION (Psychological Augmented Social Interaction Over Networks) explores communication and collaboration in social groups using immersive and mobile displays augmented by implicit communication signals (e.g., from biosensors).
AI & VR Lab
The AI & VR Lab of Bielefeld University founded by Prof. Wachsmuth and headed by Prof. Latoschik hosted several novel projects in the area of intelligent graphics and intelligent Virtual Environments.
Virtuelle Werkstatt (Virtual Workshop)
The project's goal is the development of a demonstration platform for Virtual-Reality-based prototyping using multimodal (gesture and speech) interaction metaphors.
We contributed to the MAX project (Multimodal Assembly eXpert) by porting it to an immersive environment and implementing multimodal speech and gesture input.
DEIKON (DEixis In KonstruktionsDialogen, deixis in construction dialogues), part of the SFB 360, explores the use of deictic expressions in gesture and speech as input methods for construction scenarios.
SGIM (Speech and Gesture Interfaces for Multimedia) develops techniques for communicating with multimedia systems through the detection and interpretation of a user's verbal (speech) and coarse gestural input.
VIENA (Virtual Environments and Agents) explored the use of a multi-agent system to capture natural language and coarse pointing gestures for interacting with an interior design application.