Welcome to new BIFOLD Fellow Prof. Dr. Stefan Haufe

Machine learning and inverse modeling for medical time series

A warm welcome to Prof. Dr. Stefan Haufe as a new BIFOLD Fellow. Stefan Haufe is a professor of computer science and, since 2021, head of the "Uncertainty, Inverse Modeling and Machine Learning" (UNIML) group at Technische Universität Berlin. This is a joint appointment with the Physikalisch-Technische Bundesanstalt Berlin (PTB), where he also leads the working group "Machine Learning and Uncertainty". In addition, he heads the European Research Council (ERC)-funded Braindata Group at Charité - Universitätsmedizin Berlin.
His research focuses on the development and validation of signal processing, inverse modeling, and machine learning (ML) techniques for neuroimaging and other medical data. He completed his PhD from 2006 to 2011 in the group of BIFOLD Co-director Prof. Dr. Klaus-Robert Müller and subsequently held several postdoc positions in the USA and at TU Berlin before receiving an ERC Starting Grant in 2019, with which he established a research group at Charité - Universitätsmedizin Berlin.

"My research can be roughly divided into three areas. In connection with the ERC Grant we are dealing with the problem of robust estimation of functional interaction/communication between brain regions from non-invasively derived brainwave measurements. These functional signatures could provide insights into what pathological changes occur in the brains of patients with neurological or psychiatric disorders. In another research stream we investigate clinical routine data, such as those collected in ITUs. These include time series of various sensor data, lab data, intervention data like medication administration, etc.. We explore how to identify patterns in these data using ML techniques to predict potential complications such as sepsis or delirium. In a further step, we want to provide recommendations, e.g. on medication or which anesthetic should be used for individual patients", explains Stefan Haufe.
As part of his work at PTB, Germany’s National Metrology Institute, Stefan Haufe also investigates ways to provide quality control for AI systems, with a focus on medicine; this is his third research area. "For example, we examine the uncertainty estimates of algorithms and check how accurate these estimates really are. Similarly, we benchmark the correctness of the outputs of so-called explainable AI methods. Ultimately, these and related aspects such as robustness and fairness are considered crucial for the certification of AI systems in high-risk domains such as medicine," says Stefan Haufe. Within BIFOLD, he sees interesting intersections for both topics.

One opportunity to meet him in person and learn more about his research and possible fields for collaboration is the upcoming BIFOLD Colloquium “Machine learning and inverse modeling for medical time series / What problems does explainable AI solve?” on July 24th, 2024.