AIL AT WORK

Motivation & Goal
How can we use AI in everyday work? How can people understand the AI they are working with? How can we foster this understanding? The project AIL AT WORK strives to answer these questions from a human-centered perspective. In an interdisciplinary collaboration, we examine the cognitive, social, and emotional requirements that arise when AI is used in real-life work with human users in a social space. Together with industry partners, we develop new, modular measurements for AI Literacy (AIL) and evaluate them either with simulated AI prototypes or in the field. For this evaluation, we make use of novel approaches such as eXtended AI and XR testbeds.
The project has been extended until 2026, with a new focus on creating accessible open source tools for embodied AI. With these tools, users can easily try out a wide array of embodied AIs in their own work contexts and integrate them into their products.
Project Results
Meta AI Literacy Scale (MAILS)
In the face of AI anxiety, algorithm aversion, and the risk of being replaced by AI in the workplace, it is important to know one's own AI literacy (i.e., competence in using AI) and the AI literacy of (potential) employees. Valid measurement of AI literacy is essential for selecting personnel, identifying skill and knowledge gaps, and evaluating AI literacy interventions.
To address this need, we developed the Meta-Artificial Intelligence Literacy Scale (MAILS): a self-assessment questionnaire for the economical assessment of AI literacy. Its factorial structure was confirmed and further validated in exploratory studies. The scale measures the abilities to use and apply AI, understand AI, detect AI, weigh ethical considerations regarding AI (AI Ethics), and create AI, as well as AI self-efficacy in learning and problem-solving and AI self-management (i.e., AI persuasion literacy and emotion regulation).
>>> Online Version of the MAILS <<<
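As a rough illustration of how a multi-factor self-report scale of this kind can be scored, the sketch below computes a mean score per subscale from raw item responses. The item-to-subscale mapping, item counts, and response range are hypothetical placeholders for illustration only, not the published MAILS items.

```python
# Hypothetical illustration: scoring a MAILS-style self-report questionnaire.
# The item groupings and the 0-10 response range are placeholders, not the published scale.

from statistics import mean

# Map each subscale to the (hypothetical) item keys belonging to it.
SUBSCALES = {
    "use_and_apply_ai":   ["item_01", "item_02", "item_03"],
    "understand_ai":      ["item_04", "item_05", "item_06"],
    "detect_ai":          ["item_07", "item_08"],
    "ai_ethics":          ["item_09", "item_10"],
    "create_ai":          ["item_11", "item_12"],
    "ai_self_efficacy":   ["item_13", "item_14"],
    "ai_self_management": ["item_15", "item_16"],
}

def score_responses(responses: dict[str, int]) -> dict[str, float]:
    """Return the mean rating per subscale from raw item responses."""
    return {
        subscale: mean(responses[item] for item in items)
        for subscale, items in SUBSCALES.items()
    }

if __name__ == "__main__":
    # Example: a respondent who rated every item with 7 on a 0-10 scale.
    example = {f"item_{i:02d}": 7 for i in range(1, 17)}
    print(score_responses(example))
```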
Publications
- Martin J. Koch, Astrid Carolus, Carolin Wienrich, Marc Erich Latoschik, Meta AI Literacy Scale: Further validation and development of a short version, In Heliyon, p. 23. 2024.
- Martin J. Koch, Carolin Wienrich, Samantha Straka, Marc Erich Latoschik, Astrid Carolus, Overview and confirmatory and exploratory factor analysis of AI literacy scale, In Computers and Education: Artificial Intelligence, Vol. 7, p. 100310. Elsevier BV, 2024.
- Astrid Carolus, Martin J. Koch, Samantha Straka, Marc Erich Latoschik, Carolin Wienrich, MAILS - Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies, In Computers in Human Behavior: Artificial Humans, Vol. 1 (2), p. 100014. 2023.
- Astrid Carolus, Yannick Augustin, André Markus, Carolin Wienrich, Digital Interaction Literacy Model – Conceptualizing competencies for literate interactions with voice-based AI systems, In Computers and Education: Artificial Intelligence, 4, 100114, 2023.
- Carolin Wienrich, Astrid Carolus, André Markus, Yannick Augustin, AI Literacy: Kompetenzdimensionen und Einflussfaktoren im Kontext von Arbeit, 2022.
Training AI Literacy with Embodied AI
Employers might seek methods to improve their employees' AI literacy in order to increase trust in AI and reduce the risk of AI misuse in the workplace. To address this need, we first developed a serious game that teaches students basic machine learning concepts and how neural networks are built. A preliminary evaluation found that an embodied AI agent can not only further assist students in understanding these concepts but can also improve knowledge retention.
Our future work will build on existing materials from the MOTIV Project and its training platform to develop advanced materials for training AI literacy in academic and work environments alike, and will further investigate the use of embodied AI agents to improve learning outcomes. Once evaluated, the training materials will be made available to everyone planning to improve the AI literacy of their employees.
Publications
- André Markus, Maximilian Baumann, Jan Pfister, Astrid Carolus, Andreas Hotho, Carolin Wienrich, Safer Interaction with IVAs: The Impact of Privacy Literacy Training on Competent Use of Intelligent Voice Assistants, In Computers and Education: Artificial Intelligence, 100372, 2025.
- Maximilian Baumann, André Markus, Jan Pfister, Astrid Carolus, Andreas Hotho, Carolin Wienrich, Master your practice! A quantitative analysis of device and system handling training to enable competent interactions with intelligent voice assistants, In Computers in Human Behavior Reports, 17, 100610, 2025.
- André Markus, Jan Pfister, Astrid Carolus, Andreas Hotho, Carolin Wienrich, Effects of AI understanding-training on AI literacy, usage, self-determined interactions, and anthropomorphization with voice assistants, In Computers and Education Open, 6, 100176, 2024.
- André Markus, Jan Pfister, Astrid Carolus, Andreas Hotho, Carolin Wienrich, Empower the user – The impact of functional understanding training on usage, social perception, and self-determined interactions with intelligent voice assistants, In Computers and Education: Artificial Intelligence, 2024.
- Philipp Krop, Sebastian Oberdörfer, Marc Erich Latoschik, Traversing the Pass: Improving the Knowledge Retention of Serious Games Using a Pedagogical Agent, In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents. Würzburg, Germany: Association for Computing Machinery, 2023.
- Carolin Wienrich, Marc Erich Latoschik, eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research, In Frontiers in Virtual Reality, Vol. 2, p. 94. 2021.
Design Guidelines for Trustworthy AI in Work Environments
AI in the workplace is often mistrusted, which leads to expensive AI tools going unused. Apart from uncertainty about AI collecting private data, the reasons for this effect are not yet well understood. Thus, the questions arise: What constitutes trust in AI? How can AI be designed to be trustworthy and accepted in the workplace?
To answer these questions, we conducted a series of studies investigating different designs for embodied AI. Our results indicate that, although the perfect AI companion in the workplace is highly individual, it is often drawn as a humanoid figure, a robot, or a piece of hardware. In addition, some general rules can be derived:
- The embodiment of an AI agent should be congruent with the task and environment to create a basis for trustworthy interaction.
- The embodied AI's perceived competence plays a crucial role in trusting and accepting it. This impression already forms during the first encounter with the AI, regardless of the quality of the interaction.
- Embodied AI agents that are perceived as humanlike evoke more trust than agents that appear more machine-like.
Thus, designers should strive to create humanlike embodied AI agents and evaluate whether they are perceived as competent and congruent with the task.
Currently, we are conducting further studies to deepen our understanding of the factors that influence how embodied AI is perceived. We are investigating how personal characteristics of Germany's working population, as well as the AI's behavior and visual design, affect the perception of embodied AI. Further, we investigate how the mere presence of embodied AI agents can facilitate behavior in office contexts, and how they can support work in the medical field. We aim to create a comprehensive overview of the factors influencing how embodied AI is perceived and how these differ across work environments, and to derive a valuable set of guidelines for developers to create trustworthy embodied AI in work contexts.
Publications
- Philipp Krop, Martin J. Koch, Astrid Carolus, Marc Erich Latoschik, Carolin Wienrich, The Effects of Expertise, Humanness, and Congruence on Perceived Trust, Warmth, Competence and Intention to Use Embodied AI, In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), p. 9. New York, NY, USA: ACM, 2024.
- André Markus, Jan Pfister, Astrid Carolus, Andreas Hotho, Carolin Wienrich, Empower the user – The impact of functional understanding training on usage, social perception, and self-determined interactions with intelligent voice assistants, In Computers and Education: Artificial Intelligence, 2024.
- Samantha Straka, Martin Jakobus Koch, Astrid Carolus, Marc Erich Latoschik, Carolin Wienrich, How Do Employees Imagine AI They Want to Work with: A Drawing Study, In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery, 2023.
- Carolin Wienrich, Marc Erich Latoschik, eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research, In Frontiers in Virtual Reality, Vol. 2, p. 94. 2021.
Development of Open Source Tools for Embodied AI
Even companies with high AI literacy often encounter challenges or lack the necessary resources to develop their own embodied AI systems. To address this, we are developing a suite of easily accessible open source tools that will allow companies and researchers to seamlessly integrate embodied AI systems into their workflows. We are currently setting up the necessary infrastructure and developing an open source framework based on RealityStack, so that anyone can configure and implement their own systems. The first version of the tools will be available soon.
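To give a rough idea of what configuring an embodied AI agent for a specific work context could involve, here is a minimal sketch of a possible configuration object. All class names, fields, and values are hypothetical illustrations and do not reflect the actual API of the upcoming framework or of RealityStack.

```python
# Illustrative sketch only: one possible shape for an embodied-AI agent configuration.
# Names and fields are hypothetical, not the framework's real interface.

from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    name: str
    embodiment: str                 # e.g. "humanoid", "robot", "abstract"
    voice: str                      # identifier of a text-to-speech voice
    language_backend: str           # e.g. an LLM endpoint used for dialogue
    work_context: str               # e.g. "office", "factory", "clinic"
    capabilities: list[str] = field(default_factory=list)

# Example: a humanoid office assistant (placeholder values throughout).
office_assistant = AgentConfig(
    name="office_assistant",
    embodiment="humanoid",
    voice="neutral_voice_de",
    language_backend="https://example.org/llm",   # placeholder endpoint
    work_context="office",
    capabilities=["answer_questions", "summarize_documents"],
)
```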
Publications
Coming soon!
Experiencing Future Working Contexts with Embodied AI
Academic research can provide valuable insights into the general population's AI literacy and into methods employers could use to improve AI literacy in their companies. Unfortunately, these insights often fail to reach employers and remain buried in academia. To change this, we developed two demonstrators that showcase how embodied AI can enhance future work contexts and presented them at various public events.
In the demonstrator Example Work Contexts with Embodied AI, we use virtual reality to present three different work environments (clinic, factory, and office) in which users complete a task with an AI coworker. This way, users can already explore how work tasks could be enhanced with embodied AI agents in the future and how the embodiment of an AI can influence how capable it is perceived to be (for example, one would rather trust a robot than a doctor to help them repair a robot arm). In our second demonstrator, Your Coworker ChatGPT, we use mixed reality to display an embodied version of ChatGPT in the user's office. Using a state-of-the-art pipeline for creating virtual replicas of humans combined with AI voice generation, users can talk to a digital version of GPT that looks and sounds like them, getting a glimpse of the possibilities and dangers of embodied AI in the workplace.
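Conceptually, the interaction loop behind such an embodied conversational agent can be broken down into speech recognition, dialogue generation, speech synthesis, and avatar playback. The sketch below illustrates this structure with hypothetical placeholder functions for each stage; it is not the demonstrator's actual implementation.

```python
# Conceptual sketch of an embodied chat agent's interaction loop.
# All helper functions are hypothetical placeholders, not the project's real pipeline.

def transcribe(audio: bytes) -> str:
    """Placeholder: speech-to-text for the user's microphone input."""
    raise NotImplementedError

def generate_reply(history: list[dict[str, str]]) -> str:
    """Placeholder: query a large language model with the dialogue history."""
    raise NotImplementedError

def synthesize(text: str, voice_id: str) -> bytes:
    """Placeholder: text-to-speech with a cloned or stock voice."""
    raise NotImplementedError

def play_on_avatar(audio: bytes) -> None:
    """Placeholder: drive the virtual replica's audio and lip sync in XR."""
    raise NotImplementedError

def interaction_turn(history: list[dict[str, str]], user_audio: bytes, voice_id: str) -> None:
    """One conversational turn: listen, think, speak through the embodied avatar."""
    user_text = transcribe(user_audio)
    history.append({"role": "user", "content": user_text})
    reply = generate_reply(history)
    history.append({"role": "assistant", "content": reply})
    play_on_avatar(synthesize(reply, voice_id))
```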
Publications
- Philipp Krop, David Obremski, Astrid Carolus, Marc Erich Latoschik, Carolin Wienrich, My Co-worker ChatGPT: Development of an XR Application for Embodied Artificial Intelligence in Work Environments, In 2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW). IEEE Computer Society, 2025. To be published.
Impressions
Former Team Members
- Thomas Proksch, M.Sc.
- Samantha Straka, M.Sc.
- Fabienne Uelinn, B.Sc.
- Fabian Machalett, B.Sc.
- Felix Foschum, B.Sc.
- Dr. Martin J. Koch
- Maximilian Baumann, B.Sc.
News
Theses and projects
Assigned
Closed
Funding and Collaboration