2026
Murat Yalcin, Marc Erich Latoschik, End-to-End Non-Invasive ECG Signal Generation from PPG Signal: A Self-Supervised Learning Approach. In Frontiers in Physiology, 2026. To be published.
@article{yalcin2026endtoend,
title = {End-to-End Non-Invasive ECG Signal Generation from PPG Signal: A Self-Supervised Learning Approach},
author = {Yalcin, Murat and Latoschik, Marc Erich},
journal = {Frontiers in Physiology},
year = {2026},
note = {To be published},
url = {https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2026.1694995/abstract}
}
Abstract: Electrocardiogram (ECG) signals are frequently utilized for detecting important cardiac events, such as variations in ECG intervals, as well as for monitoring essential physiological metrics, including heart rate (HR) and heart rate variability (HRV). However, the accurate measurement of ECG traditionally requires a clinical environment, thereby limiting its feasibility for continuous, everyday monitoring. In contrast, Photoplethysmography (PPG) offers a non-invasive, cost-effective optical method for capturing cardiac data in daily settings and is increasingly utilized in various clinical and commercial wearable devices. However, PPG measurements are significantly less detailed than those of ECG. In this study, we propose a novel approach to synthesize ECG signals from PPG signals, facilitating the generation of robust ECG waveforms using a simple, unobtrusive wearable setup. Our approach utilizes a Transformer-based Generative Adversarial Network model, designed to accurately capture ECG signal patterns and enhance generalization capabilities. Additionally, we incorporate self-supervised learning techniques to enable the model to learn diverse ECG patterns through specific tasks. Model performance is evaluated using various metrics, including heart rate calculation and root mean squared error (RMSE), on two different datasets. The comprehensive performance analysis demonstrates that our model exhibits superior efficacy in generating accurate ECG signals (reducing the heart rate calculation error by 83.9% and 72.4% on the MIMIC-III and Who is Alyx? datasets, respectively), suggesting its potential application in the healthcare domain to enhance heart rate prediction and overall cardiac monitoring. As an empirical proof of concept, we also present an Atrial Fibrillation (AF) detection task, showcasing the practical utility of the generated ECG signals for cardiac diagnostic applications. To encourage replicability and reuse in future ECG generation studies, we have shared the dataset and will also make the code publicly available.
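The evaluation above hinges on RMSE and on the heart rate calculation error between generated and reference ECG. A minimal sketch of how such metrics can be computed, assuming NumPy/SciPy, a fixed sampling rate, and a simple threshold-based R-peak detector (the abstract does not specify the paper's actual peak-detection method):

import numpy as np
from scipy.signal import find_peaks

FS = 125  # assumed sampling rate in Hz

def rmse(generated: np.ndarray, reference: np.ndarray) -> float:
    # Root mean squared error between generated and reference ECG windows
    return float(np.sqrt(np.mean((generated - reference) ** 2)))

def heart_rate_bpm(ecg: np.ndarray, fs: int = FS) -> float:
    # Naive R-peak detection: peaks above the 90th percentile, at least 0.4 s apart
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 90), distance=int(0.4 * fs))
    rr = np.diff(peaks) / fs  # R-R intervals in seconds
    return 60.0 / float(np.mean(rr))  # beats per minute

def hr_error(generated: np.ndarray, reference: np.ndarray, fs: int = FS) -> float:
    # Absolute heart rate calculation error of the generated signal
    return abs(heart_rate_bpm(generated, fs) - heart_rate_bpm(reference, fs))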
2024
Murat Yalcin, Andreas Halbig, Martin Fischbach, Marc Erich Latoschik, Automatic Cybersickness Detection by Deep Learning of Augmented Physiological Data from Off-the-Shelf Consumer-Grade Sensors. In Frontiers in Virtual Reality, Vol. 5, 2024.
@article{10.3389/frvir.2024.1364207,
title = {Automatic Cybersickness Detection by Deep Learning of Augmented Physiological Data from Off-the-Shelf Consumer-Grade Sensors},
author = {Yalcin, Murat and Halbig, Andreas and Fischbach, Martin and Latoschik, Marc Erich},
journal = {Frontiers in Virtual Reality},
year = {2024},
volume = {5},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2024.1364207},
doi = {10.3389/frvir.2024.1364207}
}
Abstract: Cybersickness is still a prominent risk factor potentially affecting the usability of virtual reality applications. Automated real-time detection of cybersickness promises to support a better general understanding of the phenomenon and to avoid and counteract its occurrence. It could be used to facilitate application optimization, that is, to systematically link potential causes (technical development and conceptual design decisions) to cybersickness in closed-loop user-centered development cycles. In addition, it could be used to monitor, warn, and hence safeguard users against any onset of cybersickness during a virtual reality exposure, especially in healthcare applications. This article presents a novel real-time-capable cybersickness detection method by deep learning of augmented physiological data. In contrast to related preliminary work, we are exploring a unique combination of mid-immersion ground truth elicitation, an unobtrusive wireless setup, and moderate training performance requirements. We developed a proof-of-concept prototype to compare (combinations of) convolutional neural networks, long short-term memory, and support vector machines with respect to detection performance. We demonstrate that the use of a conditional generative adversarial network-based data augmentation technique increases detection performance significantly and showcase the feasibility of real-time cybersickness detection in a genuine application example. Finally, a comprehensive performance analysis demonstrates that a four-layered bidirectional long short-term memory network with the developed data augmentation delivers superior performance (91.1% F1-score) for real-time cybersickness detection. To encourage replicability and reuse in future cybersickness studies, we have released the code and the dataset publicly.
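The reported best model is a four-layered bidirectional LSTM operating on windows of physiological signals. A minimal PyTorch sketch of such an architecture; channel count, hidden size, and window length are illustrative assumptions, not the paper's reported hyperparameters:

import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    # Four-layer bidirectional LSTM for binary cybersickness detection
    def __init__(self, n_channels: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=4, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)  # 2x hidden: forward + backward states

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)         # (batch, time, 2 * hidden)
        return self.head(out[:, -1])  # one logit per window, from the last step

# Example: a batch of two 8-second windows of 4 signals sampled at 64 Hz
model = BiLSTMClassifier()
logits = model(torch.randn(2, 8 * 64, 4))  # shape (2, 1)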
Sophia C. Steinhaeusser, Elisabeth Ganal, Murat Yalcin, Marc Erich Latoschik, Birgit Lugrin, Binded to the Lights – Storytelling with a Physically Embodied and a Virtual Robot using Emotionally Adapted Lights. In 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 2117-2124, 2024.
@inproceedings{10731419,
title = {Binded to the Lights – Storytelling with a Physically Embodied and a Virtual Robot using Emotionally Adapted Lights},
author = {Steinhaeusser, Sophia C. and Ganal, Elisabeth and Yalcin, Murat and Latoschik, Marc Erich and Lugrin, Birgit},
booktitle = {2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)},
year = {2024},
pages = {2117-2124},
doi = {10.1109/RO-MAN60168.2024.10731419}
}
Abstract: Virtual environments (VEs) can be designed to evoke specific emotions, for example by using colored light; this applies not only to games but also to virtual storytelling with a single storyteller. Social robots are perfectly suited as storytellers due to their multimodality. However, there is no research yet on the transferability of robotic storytelling to virtual reality (VR). In addition, the transfer of concepts from VE design, such as adaptive room illumination, to robotic storytelling has not yet been tested. Thus, we conducted a study comparing the same robotic storytelling performed by a physically embodied robot and by a virtual robot in VR to investigate the transferability of robotic storytelling to VR. As a second factor, we either manipulated the room light following design guidelines for VEs or kept it constant. Results show that a virtual robotic storyteller is not perceived worse than a physically embodied storyteller, suggesting the applicability of virtual static robotic storytellers. Regarding emotion-driven lighting, no significant effect of colored lights on self-reported emotions was found, but adding colored light increased the social presence of the robot and its perceived competence in both VR and reality. As our study was limited to a static robotic storyteller without bodily expressiveness, future work is needed to investigate the interaction between well-researched robot modalities and the rather new modality of colored light based on our results.
Murat Yalcin, Marc Erich Latoschik, DeepFear: Game Usage within Virtual Reality to Provoke Physiological Responses of Fear. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1–8. New York, NY, USA: Association for Computing Machinery, 2024.
@inproceedings{Yalcin2024,
title = {DeepFear: Game Usage within Virtual Reality to Provoke Physiological Responses of Fear},
author = {Yalcin, Murat and Latoschik, Marc Erich},
booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
year = {2024},
pages = {1–8},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3613905.3650877},
doi = {10.1145/3613905.3650877}
}
Abstract: The investigation and classification of the physiological signals involved in fear perception is complicated by the difficulties in reliably eliciting and measuring the complex construct of fear. Virtual Reality (VR) games, in particular, can reliably elicit such physiological responses, which can then be used to develop treatments in the healthcare domain. In this study, we carried out an exploratory analysis of physiological data and assessed the feasibility of wearable sensor devices for capturing fear responses. Our contributions are 1) using an off-the-shelf commercial game (Half-Life: Alyx) to provoke the emotion of fear, 2) a performance analysis of different deep learning models, namely Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Transformer, and 3) identifying the most responsive physiological signal through comprehensive data analysis and the best sensor device for multi-level fear classification. Accuracy metrics, F1-scores, and confusion matrices showed that ECG and ACC are the two most significant signals for fear recognition.
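The evaluation relies on accuracy, F1-scores, and confusion matrices for multi-level fear classification. A small scikit-learn sketch of these metrics, using hypothetical labels (the abstract does not state the number of fear levels):

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# Hypothetical predictions for three fear levels (0 = low, 1 = medium, 2 = high)
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 2, 1, 1])

print("accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print("confusion matrix:")
print(confusion_matrix(y_true, y_pred))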
2023
Christian Rack, Tamara Fernando, Murat Yalcin, Andreas Hotho, Marc Erich Latoschik, Who Is Alyx? A new Behavioral Biometric Dataset for User Identification in XR. In Frontiers in Virtual Reality (David Swapp, Ed.), Vol. 4, 2023.
@article{rack2023behavioral,
title = {Who Is Alyx? A new Behavioral Biometric Dataset for User Identification in XR},
author = {Rack, Christian and Fernando, Tamara and Yalcin, Murat and Hotho, Andreas and Latoschik, Marc Erich},
editor = {Swapp, David},
journal = {Frontiers in Virtual Reality},
year = {2023},
volume = {4},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2023.1272234/full},
doi = {10.3389/frvir.2023.1272234}
}
Abstract: This article presents a new dataset containing motion and physiological data of users playing the game 'Half-Life: Alyx'. The dataset specifically targets behavioral and biometric identification of XR users. It includes motion and eye-tracking data captured with an HTC Vive Pro from 71 users playing the game on two separate days for 45 minutes. Additionally, we collected physiological data from 31 of these users. We provide benchmark performances for the task of motion-based identification of XR users with two prominent state-of-the-art deep learning architectures (GRU and CNN). After training on the first session of each user, the best model can identify the 71 users in the second session with a mean accuracy of 95% within 2 minutes. The dataset is freely available at https://github.com/cschell/who-is-alyx
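The benchmark identifies the 71 users from windows of motion data with GRU and CNN models. A minimal PyTorch sketch of a GRU-based identifier; the feature count, layer depth, and window length are illustrative assumptions, not the paper's configuration:

import torch
import torch.nn as nn

class GRUIdentifier(nn.Module):
    # GRU baseline for motion-based user identification (71-way classification)
    def __init__(self, n_features: int = 21, n_users: int = 71, hidden: int = 128):
        super().__init__()
        self.gru = nn.GRU(input_size=n_features, hidden_size=hidden,
                          num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_users)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.gru(x)
        return self.head(out[:, -1])  # per-user logits from the last time step

# Example: a batch of four windows of 300 tracking frames with 21 features each
model = GRUIdentifier()
logits = model(torch.randn(4, 300, 21))  # shape (4, 71)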
2022
Christian Rack, Fabian Sieper, Lukas Schach, Murat Yalcin, Marc E. Latoschik, Dataset: Who is Alyx? (GitHub Repository). 2022.
@dataset{who_is_alyx_2022,
title = {Dataset: Who is Alyx? (GitHub Repository)},
author = {Rack, Christian and Sieper, Fabian and Schach, Lukas and Yalcin, Murat and Latoschik, Marc E.},
year = {2022},
url = {https://github.com/cschell/who-is-alyx},
doi = {10.5281/zenodo.6472417}
}
Abstract: This dataset contains over 110 hours of motion, eye-tracking, and physiological data from 71 players of the virtual reality game “Half-Life: Alyx”. Each player played the game on two separate days for about 45 minutes using an HTC Vive Pro.
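A hypothetical loading sketch for the repository's data, assuming a local clone with one directory per player containing per-session CSV files; the actual layout and column names should be taken from the repository README:

from pathlib import Path
import pandas as pd

DATA_ROOT = Path("who-is-alyx")  # local clone of github.com/cschell/who-is-alyx

def load_player_sessions(player_dir: Path) -> list:
    # Assumption: each player's recordings are stored as CSV files
    return [pd.read_csv(csv) for csv in sorted(player_dir.glob("**/*.csv"))]

for player_dir in sorted(DATA_ROOT.iterdir()):
    if player_dir.is_dir():
        sessions = load_player_sessions(player_dir)
        print(player_dir.name, "->", len(sessions), "CSV files")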