January 29, 2025

Researcher Spotlight: Thomas Schnake

Dr. Thomas Schnake recently completed his PhD on "Developing Model-Aligned and Human-Readable Explanations for Artificial Intelligence," earning the distinction of 'Summa Cum Laude' for his thesis. He is currently a postdoctoral researcher at BIFOLD and the Machine Learning Research Group at TU Berlin. His research interests lie in the field of explainable artificial intelligence (XAI), where he focuses on creating explanation methods that both accurately represent model behavior and remain intuitive and accessible to humans. Additionally, Thomas explores how XAI can unlock novel scientific insights by revealing the underlying mechanisms that allow machine learning models to outperform human abilities in specific tasks. 

Please describe and explain your research focus.
Thomas: In my research, I focus on developing explanation methods that help understand the decision-making processes of AI models. My goal is to represent these complex processes in terms that are comprehensible to humans, translating intricate strategies into logical abstractions of decisions.

While humans make decisions by combining concepts and knowledge logically, many machine learning models rely on numerous functions and dependencies, often represented by artificial neural networks. My framework aims to clarify these complex decisions and highlight the significance of specific logical relationships in the model.

I am part of a team that has developed an explainable AI (XAI) method called “Symbolic XAI”. This algorithm is designed to reveal how a machine learning model draws its logical conclusions. To test “Symbolic XAI”, we studied a machine learning model that predicts whether a movie review is positive or negative. This can be tricky when reviews contain combinations like “The movie was not good”.
Traditional XAI methods highlight both ‘not’ and ‘good’ as important. However, this parallel evaluation is an oversimplification that does not reflect how these words influence each other. Understanding this mutual effect is crucial, since ‘not’ and ‘good’, considered individually, play entirely different roles in the sentence. Without examining their interplay, it remains unclear whether the model actually exploits the negation of a positive word in its prediction.
“Symbolic XAI”, on the other hand, follows the logic of how humans make decisions. A person knows that it is the combination of ‘not’ and ‘good’ that produces the negative sentiment, and “Symbolic XAI” likewise emphasizes this combination rather than the individual words. On this basis, “Symbolic XAI” is able to make the machine learning model's decision-making transparent.

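The interplay described in this example can be captured with a simple interaction score. The sketch below is only an illustration of that idea, not the actual Symbolic XAI implementation; the toy_sentiment function is a hypothetical stand-in for a trained sentiment model. It measures what the pair {'not', 'good'} contributes jointly, beyond the sum of what each word contributes on its own:

```python
# Illustrative only: toy_sentiment is a hypothetical stand-in for a
# trained sentiment model, with a crude built-in negation rule.
def toy_sentiment(tokens):
    positive = {"good", "great"}
    score = sum(1.0 for t in tokens if t in positive)
    if "not" in tokens and positive & set(tokens):
        score -= 2.0  # negation flips the positive contribution
    return score

def without(tokens, remove):
    """Mask out a set of words by removing them from the input."""
    return [t for t in tokens if t not in remove]

sentence = ["the", "movie", "was", "not", "good"]
f = toy_sentiment

# Second-order interaction of {'not', 'good'}: the joint effect of the
# pair after subtracting each word's individual masking effect.
interaction = (f(sentence)
               - f(without(sentence, {"not"}))
               - f(without(sentence, {"good"}))
               + f(without(sentence, {"not", "good"})))
print(interaction)  # -2.0: the negative sentiment lives in the combination
```

A strongly negative interaction signals that the model treats ‘not good’ as a negation rather than as two independent words; surfacing exactly this kind of logical relationship, rather than per-word scores, is what Symbolic XAI is designed for.
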
Which major innovation do you expect in your research field in the next ten years?
Thomas: I believe explainable artificial intelligence will drive innovation in two key areas over the next decade. First, it can accelerate scientific discoveries by revealing how machine learning models achieve their results, enabling us to learn from these strategies and make better-informed decisions.

Second, explainable AI is vital for enhancing machine learning models. By understanding decision-making processes, we can refine training data and adjust model architecture, leading to more accurate, robust, and trustworthy outcomes.

What personally motivated you to enter this specific research field?
Thomas: I was always good at math, which is why I chose to study it. I started with mathematics and later pursued a PhD in machine learning. My motivation lies in applying mathematical principles across various fields. Machine learning perfectly combines mathematical challenges with interdisciplinary applications, necessitating collaboration with experts from diverse domains, which I find particularly exciting.

What's next in your career: what are your current or future projects?
Thomas: In my future projects, I plan to focus on application-oriented explanation methods and machine learning, specifically applying the techniques I developed during my PhD in fields like medicine, quantum chemistry, and natural language processing. My goal is to leverage these algorithms for scientific discoveries. Additionally, I aim to pursue a postdoc position at the Vector Institute in Canada.

Which living or historical scientist has fascinated you and why?
Thomas: I'm fascinated by the Nobel Prize Committee's decision to award the Physics and Chemistry prizes to AI scientists. This recognition highlights the significance of AI research in society and science, signaling that our work truly matters. I aim to contribute positively to the scientific community through my research.

AI is considered a disruptive technology - in which areas of life do you expect the greatest upheaval in the next ten years?
Thomas: I hope AI will positively impact several areas. I expect it to accelerate medical advancements by creating patient-specific treatments, streamline bureaucracy in companies and government, and help address the skilled labor shortage in Germany by automating repetitive tasks. This would allow workers to concentrate on more complex and creative activities, boosting overall productivity.

Where would one find you if you are not sitting in front of the computer?
Thomas: If I’m not in front of my computer, you will likely find me engaged in one of my hobbies. I might be at the gym, cycling, doing volunteer work, or engaging in politics. You could also find me playing the guitar, composing, or performing songs I’ve written.