Beyond Explainable AI

Wojciech Samek and Klaus-Robert Müller publish a new book on Explainable AI

To tap the full potential of artificial intelligence, we not only need to understand the decisions it makes; these insights must also be made applicable. This is the aim of the new book “xxAI – Beyond Explainable AI”, edited by Wojciech Samek, head of the Artificial Intelligence department at the Fraunhofer Heinrich Hertz Institute (HHI) and BIFOLD researcher, and Klaus-Robert Müller, professor of machine learning at the Technical University of Berlin (TUB) and co-director at BIFOLD. The publication is based on a workshop held during the International Conference on Machine Learning in 2020. Co-editors also include the AI experts Andreas Holzinger, Randy Goebel, Ruth Fong and Taesup Moon. It is already the second publication on the topic edited by Samek and Müller.

Following the strong response to the editors’ first book, “Explainable AI: Interpreting, Explaining and Visualizing Deep Learning” (2019), which presented an overview of methods and applications of Explainable AI (XAI) and racked up over 300,000 downloads worldwide, the new publication goes a step further: it provides an overview of current trends and developments in the field of XAI. In one chapter, for example, Samek and Müller’s team shows that XAI concepts and methods developed for explaining classification problems can also be applied to other types of problems. In classification, the target variables are categorical, such as “What color is the traffic light right now: red, yellow, or green?”. XAI techniques developed for such problems can also help explain models in unsupervised learning, reinforcement learning, or generative modeling. The authors thus expand the horizons of previous XAI research and provide researchers and developers with a set of new tools for explaining a whole new range of problem types and models.
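The basic mechanism can be illustrated with a minimal gradient-based attribution sketch (a generic toy example, not one of the book’s specific methods): the score that is explained is simply swapped depending on the task.

```python
# Minimal sketch: gradient x input attribution for a toy classifier.
# Hypothetical model and data for illustration, not a method from the book.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))  # 3 classes, e.g. red/yellow/green
x = torch.randn(1, 4, requires_grad=True)                             # one input sample

logits = model(x)
target = int(logits.argmax(dim=1))        # explain the predicted class
score = logits[0, target]
score.backward()                          # gradient of the class score w.r.t. the input

relevance = (x * x.grad).detach()         # per-feature "heatmap" (gradient x input)
print(relevance)

# The same recipe carries over beyond classification: replace `score` by a
# cluster assignment score, a regression output, or a value estimate in
# reinforcement learning, and the backward pass yields an analogous heatmap.
```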

The book is available free of charge.
(Copyright: Fraunhofer HHI)

As the title “Beyond Explainable AI” suggests, the book also shows how insights gained from explanation methods can be put to practical use, for example to make models more robust and efficient. While previous research focused on the path from AI as a “black box” to explanations of its decisions, several chapters in the new book address the next step: toward an improved AI model. Furthermore, other authors reflect on their research not only within their own field of work, but also in the context of society as a whole. They cover a variety of areas that go far beyond classical XAI research, for example the relationships between explainability and fairness, explainability and causality, and legal aspects of explainability.

The book is available free of charge here.

New professorship for Machine Learning and Communications

“I want to move beyond purely ‘explaining’ AI”

BIFOLD researcher Dr. Wojciech Samek has been appointed Professor of Machine Learning and Communications at TU Berlin with effect from 1 May 2022. Professor Samek heads the Department of Artificial Intelligence at the Fraunhofer Heinrich-Hertz-Institute.

Prof. Wojciech Samek receives his certificate of appointment from the President of TU Berlin, Prof. Geraldine Rauch.
(Copyright: private)


Professor Samek – very warm congratulations on your appointment at TU Berlin. You have been involved in BIFOLD – Berlin’s AI competence center – since 2014. What connects you to BIFOLD?

“I have supported BIFOLD and its predecessor projects to the best of my abilities since the very beginning. My cooperation with BIFOLD is very important to me – as is strengthening Berlin’s role as a center of artificial intelligence (AI). I have worked very closely and successfully with BIFOLD Co-Director Professor Klaus-Robert Müller on the explainability of artificial intelligence. I have also enjoyed a successful cooperation with Professor Thomas Wiegand working at the interface between machine learning (ML) and compression. In addition to the 12 patent registrations resulting from these and other collaborative undertakings with researchers at TU Berlin, my collaboration with Thomas Wiegand has had a considerable influence on the new international MPEG-7 NNR standard on the compression of neural networks.”

What areas would you like to focus on in your future research at TU Berlin/BIFOLD?

“My goal is to further develop three areas: explainability and trustworthiness of artificial intelligence, the compression of neural networks, and so-called federated learning. I aim to focus on the practical, methodological, and theoretical aspects of machine learning at the interface to other areas of application. The combination of machine learning and communications is something I find particularly interesting. In my research, I use explainability to improve ML models and to make them more robust. So, I am looking to move beyond purely ‘explaining’. My goal is to increase reliability and ultimately develop explanations based on checking mechanisms for ML models.”

How does this research complement the research work of BIFOLD and TU Berlin?

“These research areas are of great importance not only for BIFOLD with its focus on ’responsible AI’ and ‘communications’ as an area of application; they also offer a whole range of possibilities for collaboration with researchers at Faculty IV. Klaus-Robert Müller, Grégoire Montavon, and I are planning to transfer the explainability concepts we developed to other types of ML models and areas such as regression, segmentation, and clustering. Enhanced visualizations of explanations will enable us to develop more reliable and more explainable ML models for communication applications. My research will also focus on the development of enhanced compression methods for neural networks and federated learning. Examining the interaction between learning, compression, and communications is of great interest here. Overall, we can say that my research will strengthen the research profile of Faculty IV while at the same time offering very many new possibilities for cooperation at the interface between ML, information theory, signal compression, and communications.”

Do you already have an idea of what you would like to implement in teaching?

“I am really looking forward to the new challenges that teaching will bring. Within BIFOLD, I am planning a special course titled “Advanced Course on Machine Learning & Communications.” This would focus on federated learning, methods for the compression of neural networks, efficient neural architectures, and ML methods for communication applications. The lectures could also be supplemented by a lab. What I envisage is the implementation of federated learning on many networked devices. This would provide hands-on opportunities for students to learn directly about topics such as resource-saving ML, compression of neural networks, and federated learning. I would also be very interested in giving a course on explainable and trustworthy AI. A BIFOLD special course, a master’s/bachelor’s seminar, or a supplementary module to the bachelor’s course on cognitive algorithms would all provide suitable formats for this.”
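The core idea behind such a lab, federated averaging across devices that never share their raw data, can be sketched in a few lines (a hypothetical toy simulation, not course material):

```python
# Minimal sketch of federated averaging (FedAvg) on simulated clients.
# Toy setup for illustration only; models, data, and names are hypothetical.
import copy
import torch
import torch.nn as nn

def local_update(model, data, targets, epochs=1, lr=0.1):
    """Train a copy of the global model on one client's private data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(local(data), targets)
        loss.backward()
        opt.step()
    return local.state_dict()

def federated_average(states):
    """Average the parameter dictionaries returned by the clients."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(3, 1)
clients = [(torch.randn(20, 3), torch.randn(20, 1)) for _ in range(4)]  # simulated devices

for communication_round in range(5):
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(federated_average(states))  # only parameters are exchanged
```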

Summer School Information Event

Date and Time: Monday, 25 April 2022, 4:00 pm

Speakers: Andrea Hamm, Martin Schuessler, Dr. Stefan Ullrich

Venue: virtual event

Participation: If you are interested in participating, please contact: gs@bifold.berlin

Andrea Hamm and Martin Schuessler, supported by Dr. Stefan Ullrich, will present the program of the BIFOLD Ethics in Residence Summer School, which will take place from 20 to 24 June at a hotel near Berlin and at the Weizenbaum Institute. The Summer School complements the technological research on artificial intelligence (AI) within the AI Competence Centres with aspects of ethics, explainability, and sustainability. It is organized within the Ethics in Residence program, a collaboration between the Weizenbaum Institute for the Networked Society – the German Internet Institute – and the network of the German AI Competence Centres.

The Summer School is fully funded by BIFOLD and open to all BIFOLD PhD students as well as those from the other centres of the German AI Competence Centre Network (ML2R, MCML, TUE-AI, ScaDS, DFKI).


The program includes multiple hands-on workshops to advance the participants’ individual research projects, several high-profile international guest lectures and Q&A sessions with the guest speakers, a panel discussion, and participants’ presentation sessions for expert jury feedback. The international expert researchers and guest speakers joining have backgrounds in computing within limits, disaster research, and COVID-19 data research. In addition, the summer school offers two main tracks, one on explainable deep neural networks (XNN) and one on sustainable AI (SAI), for a more specialized training of the PhD students.

About the presenters & track leaders
Andrea Hamm
Copyright: private

Andrea Hamm is a doctoral researcher at the Weizenbaum Institute for the Networked Society and TU Berlin. In her dissertation, she investigates civic technologies for environmental monitoring in the context of making cities and communities more sustainable. She is particularly interested in the real-world effects of AI technologies, for instance in understanding how AI-supported simulations contribute to reducing CO2 footprints and material consumption. She has published at international venues such as the ACM CHI Conference on Human Factors in Computing Systems, the ACM ICT for Sustainability Conference, and the International Conference on Wirtschaftsinformatik (WI). Her work bridges human-computer interaction (HCI) and design studies with communication studies. She is a member of the German-Japanese Society for Social Sciences and the AI Climate Change Center Berlin-Brandenburg. In 2019, she was a guest researcher at Aarhus University, Denmark, after previously studying at Freie Universität Berlin (Germany), Vrije Universiteit Brussel (Belgium), and Université Catholique de Lille (France).

Martin Schuessler is a tech-savvy, interdisciplinary human-computer interaction researcher who believes that the purpose of technology is to enhance people’s lives. As an undergraduate, he pursued this belief from a technical perspective by investigating the usability of new interaction paradigms such as tangible and spatial interfaces at OvGU Magdeburg and the Telekom Innovation Labs Berlin. As a PhD student at the interdisciplinary Weizenbaum Institute, he adopted a broader, often less technical perspective on the same belief (still involving a lot of programming). His dissertation work looks at ways to make computer vision systems more intelligible for users. Martin has been a visiting researcher at the University College London Interaction Centre and the Heidelberg Collaboratory for Image Processing. He has published articles at top international conferences, including the International Conference on Learning Representations (ICLR), the ACM Conference on Human Factors in Computing Systems (CHI), Computer-Supported Cooperative Work and Social Computing (CSCW), and Intelligent User Interfaces (IUI).

Martin Schüssler
Copyright: private

The Summer School as a whole, as part of the BIFOLD Ethics in Residence program, is overseen by Dr. Stefan Ullrich.

Available PhD research topics

Based on the overarching research foci of BIFOLD, the BIFOLD Graduate School is offering new PhD projects addressing current challenges in artificial intelligence (AI) and data science (DS), with a focus on data management, machine learning, and their intersection.
Below is a brief description of the current research pursued by the BIFOLD research groups, including short lists of their main topics and foci. For more details, we recommend that you look at the respective webpages of the group leads.
Contact:
Please feel free to reach out to the group leads directly or, depending on the nature of your query, to: gsapplication@bifold.tu-berlin.de

Full job posting: EN / DE

BIFOLD Research Groups and their topics

The Distinguished Research Group of Volker Markl works on a wide range of topics and challenges in Database Systems and Information Management, with the overarching goal to address both the human and technical latencies prevalent in the data analysis process. The group investigates:

  • Automatic Optimization of Data Processing on Modern Hardware.
  • Automatic Optimization of Distributed ML Programs.
  • Optimization of the Data Science and ML Process.
  • Hardware-tailored Code Generation.
  • Compliant Geo-distributed Data Analytics.
  • Efficient Visualization of Big Data.
  • Scalable Gathering and Processing of Distributed Streaming Data.
  • Data Processing on Modern Hardware.
  • Scalable State Management.

The Distinguished Research Group led by Klaus-Robert Müller tackles problems within the bigger fields of Machine Learning and Intelligent Data Analysis, with the overarching goals to develop robust and interpretable ML methods for learning from complex structured and non-stationary data, and the fusion of heterogeneous multi-modal data sources. The group works on:

  • Learning from Structured, Non-stationary and Multi-modal Data.
  • Incorporating Domain Knowledge and Symmetries in ML Models.
  • Robust Explainable AI for Structured, Heterogeneous Data.
  • Structured Anomaly Detection.
  • Robust Reinforcement Learning in Complex, Partially Observed State Spaces.
  • ML Applications in the Sciences.
  • Deep Learning and GANs.

The Senior Research Group led by Begüm Demir works on Big Data Analytics for Earth Observation (EO) at the intersection of remote sensing, DM and ML. The group investigates and creates theoretical and methodological foundations of DM and ML for EO, with the goal to process and analyze a large amount of decentralized EO data in a scalable and privacy-aware manner and focuses on the following topics:

  • Privacy-preserving Analysis of EO Data.
  • Continual Learning for Large-Scale EO Data Analysis.
  • Heterogeneous Multi-Source EO Data Analysis.
  • Uncertainty-Aware Analysis of Large-Scale EO Data.

The Senior Research Group led by Frank Noé concentrates on the development of ML methods for solving fundamental problems in chemistry and physics. Currently, the group focuses on:

  • New ML Methods for Solving Fundamental Physics Problems.
  • Quantum Mechanics – Electronic Structure Problem.
  • Statistical Mechanics – Sampling Problem.
  • New ML methods Inspired by Physics.
  • Neural Network Optimization, Sampling, and Statistical Mechanics.
  • Graph Neural Networks.

The Junior Research Group led by Grégoire Montavon works on advancing Explainable AI (XAI) in the context of deep neural networks (DNNs). Its research focuses on solidifying the theoretical and algorithmic foundations of XAI for DNNs and closing the gap between existing XAI methods and practical desiderata:

  • From Explainable AI to trustworthy models.
  • From Explainable AI to actionable systems.
  • Applications to historical networks and biological interaction networks.

The Independent Research Group led by Jorge-A. Quiané-Ruiz looks into Big Data Systems with the goal to develop a scalable and efficient big data infrastructure that supports next-generation distributed information systems and creates an open data-related ecosystem.

  • Worldwide-scalable data processing.
  • Efficient secure data processing.
  • Reliable pricing, usage-tracing, and payment models.

The Independent Research Group of Shinichi Nakajima focuses on probabilistic modelling and inference methods for multimodal, heterogeneous, and complex structured data analysis, providing ML tools that can incorporate multiple aspects of data samples observed under different circumstances, in efficient and theoretically grounded ways.

  • Generative Models and Inference Methods.
  • Applications of Generative Models and Bayesian Inference Methods.
  • Practical Uncertainty Estimation Methods.

The Research Training Group led by Steffen Zeuch works on developing a data management system for processing heterogeneous data streams in distributed fog and edge environments. The aim is to design a data management system that unifies cloud, fog, and sensor environments at an unprecedented scale – in particular, a system that hosts these environments on a unified platform and leverages the opportunities of this unified architecture for cross-paradigm data processing optimizations, in order to support emerging IoT applications.

  • Data Processing on Modern Hardware.
  • Data Processing in a Fog/Cloud Environment.

The Research Training Group led by Stefan Chmiela focuses on Machine Learning for many-body problems, with particular focus on quantum chemistry. The group develops methods that combine fundamental physical principles with statistical modeling approaches to overcome the combinatorial challenges that manifest themselves when large numbers of particles interact. Research is centered around topics such as

  • graph neural networks,
  • large-scale kernel methods and
  • the challenge of invariant/equivariant modelling.

Tracking Spooky Action at a Distance

The use of AI in classical sciences such as chemistry, physics, or mathematics remains largely uncharted territory. Researchers from the Berlin Institute for the Foundations of Learning and Data (BIFOLD) at TU Berlin and Google Research have successfully developed an algorithm that precisely and efficiently predicts the potential energy state of individual molecules using quantum mechanical data. Their findings, which offer entirely new opportunities for material scientists, have now been published in the paper “SpookyNet: Learning Force Fields with Electronic Degrees of Freedom and Nonlocal Effects” in Nature Communications.

Nanostructures of molecules
Being able to predict and model the individual steps of a chemical reaction at the molecular or even atomic level is a long-held dream of many material scientists.
(Copyright: istock.com/peterscheiber.media)

“Quantum mechanics, among other things, examines the chemical and physical properties of a molecule based on the spatial arrangement of its atoms. Chemical reactions occur based on how several molecules interact with each other and are a multidimensional process,” explains BIFOLD Co-Director Prof. Dr. Klaus-Robert Müller. Being able to predict and model the individual steps of a chemical reaction at the molecular or even atomic level is a long-held dream of many material scientists.

Every individual atom in focus

The potential energy surface, which refers to the dependence of a molecule’s energy on the arrangement of its atomic nuclei, plays a key role in chemical reactivity. Knowledge of the exact potential energy surface of a molecule allows researchers to simulate the movement of individual atoms, such as during a chemical reaction. As a result, they gain a better understanding of the atoms’ dynamic, quantum mechanical properties and can precisely predict reaction processes and outcomes. “Imagine the potential energy surface as a landscape with mountains and valleys. Like a marble rolling over a miniature version of this landscape, the movement of atoms is determined by the peaks and valleys of the potential energy surface: this is called molecular dynamics,” explains Dr. Oliver Unke, researcher at Google Research in Berlin.
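The “marble on a landscape” picture can be made concrete in a few lines: for any differentiable energy surface, the force acting on an atom is the negative gradient of the energy with respect to its position. The following toy sketch (a hypothetical 2D surface, not the SpookyNet model) lets a “marble” settle into a valley of such a landscape:

```python
# Toy illustration of dynamics on a potential energy surface:
# forces are the negative gradient of the energy. Purely illustrative.
import torch

def toy_energy(pos):
    """A simple 2D 'landscape' with peaks and valleys (hypothetical surface)."""
    x, y = pos[0], pos[1]
    return torch.sin(3 * x) * torch.cos(2 * y) + 0.1 * (x**2 + y**2)

pos = torch.tensor([1.0, -0.5], requires_grad=True)   # position of the "marble"
vel = torch.zeros(2)
dt = 0.05

for step in range(100):                               # simple damped dynamics loop
    energy = toy_energy(pos)
    force = -torch.autograd.grad(energy, pos)[0]      # F = -dE/dx
    vel = 0.98 * (vel + dt * force)                   # damping lets the marble settle in a valley
    pos = (pos + dt * vel).detach().requires_grad_(True)

print("final position:", pos.detach(), "energy:", float(toy_energy(pos)))
```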

Unlike many other fields of application of machine learning, where there is a nearly limitless supply of data for AI, generally only very few quantum mechanical reference data are available to predict potential energy surfaces, data which are only obtained through tremendous computing power. “On the one hand, exact mathematical modelling of molecular dynamic properties can save the need for expensive and time-consuming lab experiments. On the other hand, however, it requires disproportionately high computing power. We hope that our novel deep learning algorithm – a so-called transformer model which takes a molecule’s charge and spin into consideration – will lead to new findings in chemistry, biology, and material science while requiring significantly less computing power,” says Klaus-Robert Müller.

Simplified two-dimensional depiction of the potential energy surface of the molecule C2H4O. The actual potential energy surface is 15-dimensional. Areas with low potential energy are depicted in blue; those with high potential energy in red. The black line depicts the reaction from ethanal (left) to ethenol (right).
(Copyright: Oliver Unke)

In order to achieve particularly high data efficiency, the researchers’ new deep learning model combines AI with known laws of physics. This allows certain aspects of the potential energy surface to be precisely described with simple physical formulas. Consequently, the new method learns only those parts of the potential energy surface for which no simple mathematical description is available, saving computing power. “This is extremely practical. AI only needs to learn what we ourselves do not yet know from physics,” explains Müller.
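One way to picture this division of labor is to write the total energy as a known physical term plus a learned correction, so the network only has to model the residual. The sketch below is a hedged illustration of that general idea, not the actual SpookyNet architecture:

```python
# Sketch of combining a known physical term with a learned correction.
# Illustration only; SpookyNet's actual architecture differs.
import torch
import torch.nn as nn

class PhysicsPlusNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.correction = nn.Sequential(nn.Linear(6, 32), nn.SiLU(), nn.Linear(32, 1))

    def analytic_term(self, pos):
        # Known physics, e.g. a simple pairwise repulsion ~ 1/r between two atoms.
        r = torch.linalg.norm(pos[:, :3] - pos[:, 3:], dim=1, keepdim=True)
        return 1.0 / (r + 1e-6)

    def forward(self, pos):
        # Total energy = analytic physics + learned residual.
        return self.analytic_term(pos) + self.correction(pos)

model = PhysicsPlusNN()
positions = torch.randn(8, 6, requires_grad=True)     # toy batch: two atoms in 3D each
energy = model(positions).sum()
forces = -torch.autograd.grad(energy, positions)[0]   # forces again via autograd
```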

Spatial separation of cause and effect

Another special feature is that the algorithm can also describe nonlocal interactions. “Nonlocality” in this context means that a change to one atom, at a particular geometric position of the molecule, can affect atoms at a spatially separated geometric molecular position. Due to the spatial separation of cause and effect – something Albert Einstein referred to as “spooky action at a distance” – such properties of quantum systems are particularly hard for AI to learn. The researchers solved this issue using a transformer, a method originally developed for machine processing of language and texts or images. “The meaning of a word or sentence in a text frequently depends on the context. Relevant context-information may be located in a completely different section of the text. In a sense, language is also nonlocal,” explains Müller. With the help of such a transformer, the scientists can also differentiate between different electronic states of a molecule such as spin and charge. “This is relevant, for example, for physical processes in solar cells, in which a molecule absorbs light and is thereby placed in a different electronic state,” explains Oliver Unke.
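Self-attention is the mechanism that lets every atom “see” every other atom regardless of distance. A minimal sketch of generic scaled dot-product self-attention over atom features (SpookyNet’s actual formulation differs in detail):

```python
# Minimal scaled dot-product self-attention over atom features.
# Generic mechanism for illustration; not SpookyNet's exact formulation.
import math
import torch
import torch.nn as nn

num_atoms, dim = 5, 16
features = torch.randn(num_atoms, dim)          # one embedding vector per atom

to_q, to_k, to_v = (nn.Linear(dim, dim, bias=False) for _ in range(3))
q, k, v = to_q(features), to_k(features), to_v(features)

# Every atom attends to every other atom, so information can flow between
# spatially distant parts of the molecule ("nonlocal" interactions).
weights = torch.softmax(q @ k.T / math.sqrt(dim), dim=-1)   # (num_atoms, num_atoms)
updated = weights @ v                                       # context-aware atom features
```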

The publication in detail:

Oliver T. Unke, Stefan Chmiela, Michael Gastegger, Kristof T. Schütt, Huziel E. Sauceda, Klaus-Robert Müller: SpookyNet: Learning force fields with electronic degrees of freedom and nonlocal effects. Nature Communications 12, 7273 (2021)

Abstract

Machine-learned force fields combine the accuracy of ab initio methods with the efficiency of conventional force fields. However, current machine-learned force fields typically ignore electronic degrees of freedom, such as the total charge or spin state, and assume chemical locality, which is problematic when molecules have inconsistent electronic states, or when nonlocal effects play a significant role. This work introduces SpookyNet, a deep neural network for constructing machine-learned force fields with explicit treatment of electronic degrees of freedom and nonlocality, modeled via self-attention in a transformer architecture. Chemically meaningful inductive biases and analytical corrections built into the network architecture allow it to properly model physical limits. SpookyNet improves upon the current state-of-the-art (or achieves similar performance) on popular quantum chemistry data sets. Notably, it is able to generalize across chemical and conformational space and can leverage the learned chemical insights, e.g. by predicting unknown spin states, thus helping to close a further important remaining gap for today’s machine learning models in quantum chemistry.

Machine Learning Consultation

Machine learning (ML) and artificial intelligence (AI) have permeated the sciences and large parts of working life. Today, many people use machine learning techniques without being proven experts, and consequently many questions and problems arise while using these techniques. The Berlin Institute for the Foundations of Learning and Data (BIFOLD) is home to distinguished machine learning experts from different areas and offers a weekly consultation on machine learning for students, but also for companies and institutions.

BIFOLD offers a weekly ML consultation hour: every Wednesday from 11:00 am to 12:00 pm.
(Copyright: Unsplash)

While machine learning was only used by specialists a few years ago, such methods have now found application in various sciences, but also in companies: Doctors are supported in their decision making by ML models that analyze the content of tissue sections or laboratory data. Historians use ML to search for patterns and ways in which knowledge has spread around the world. In the engineering sciences, ML techniques are used, among other things, in process technology or control engineering; in chemistry, these methods support the modeling of chemical reactions. Social scientists, on the other hand, analyze the effects of applied machine learning methods on society.

In addition to students and research assistants who use such techniques in their theses or scientific research, problems and questions around ML also come up in small and medium-sized enterprises (SMEs) as well as other institutions.

Weekly consultation hours

How to translate a concrete application need into a well-posed data collection and machine learning workflow? What ML algorithm is most suitable for a given dataset? Why does an algorithm work well on current data but poorly on new data? How can you visualize or understand what a machine learning model has learned? For all questions around algorithms, deep learning, semantic speech recognition, image analysis or explainable artificial intelligence, BIFOLD offers a weekly ML consultation hour: Every Wednesday from 11:00 am – 12:00 pm, ML experts are available to support students with their specific problems in the field of ML.

Wednesdays, 11:00 am – 12:00 pm
Marchstr. 23, 10587 Berlin

Room MAR 4057

Companies or other institutions with questions concerning the application of machine learning methods can also access this scientific expertise for a fee. Please register for an appointment.

Email: coordination@bifold.berlin

Intelligent Machines Also Need Control

Dr. Marina Höhne, BIFOLD Junior Fellow, researches explainable artificial intelligence funded by the German Federal Ministry of Education and Research.

Happy to establish her own research group: Dr. Marina Höhne. (Copyright: Christian Kielmann)

For most people, the words mathematics, physics and programming in a single sentence would be reason enough to discreetly but swiftly change the subject. Not so for Dr. Marina Höhne, postdoctoral researcher at TU Berlin’s Machine Learning Group led by Professor Dr. Klaus-Robert Müller, as well as Junior Fellow at the Berlin Institute for the Foundations of Learning and Data (BIFOLD) and passionate mathematician. Since February 2020, the 34-year-old mother of a four-year-old son has been leading her own research group, Understandable Machine Intelligence (UMI Lab), funded by the Federal Ministry of Education and Research (BMBF).

In 2019, the BMBF published the call “Förderung von KI-Nachwuchswissenschaftlerinnen” which aims at increasing the number of qualified women in AI research in Germany and strengthening the influence of female researchers in this area long-term.
“The timing of the call was not ideal for me, as it came more or less right after one year of parental leave,” Höhne recalls. Nevertheless, she went ahead and submitted a detailed research proposal, which was approved. She was awarded two million euros funding over a period of four years, a sum comparable to a prestigious ERC Consolidator Grant. “For me, this came as an unexpected but wonderful opportunity to gain experience in organizing and leading research.”

A holistic understanding of AI models is needed

The topic of her research is explainable artificial intelligence (XAI). “My team focuses on different aspects of understanding AI models and their decisions. A good example of this is image recognition. Although it is now possible to identify the relevant areas in an image that contribute significantly to an AI system’s decision, i.e. whether the nose or the ear of a dog was influential in the model’s classification of the animal, there is still no single method that conclusively provides a holistic understanding of an AI model’s behavior. However, in order to be able to use AI models reliably in areas such as medicine or autonomous driving, where safety is important, we need transparent models. We need to know how the model behaves before we use it to minimize the risk of misbehavior,” says Marina Höhne outlining her research approach. Among other things, she and her research team developed explainable methods that use so-called Bayesian neural networks to obtain information about the uncertainties of decisions made by an AI system and then present this information in a way that is understandable for humans.

To achieve this, many different AI models are generated, each of which provides decisions based on slightly different parameterizations. All of these models are explained separately and subsequently pooled and displayed in a heat-map. Applied to image recognition, this means that the pixels of an image that contributed significantly to the decision of what it depicts, cat or dog, are strongly marked. The pixels that are only used by some models in reaching their decision, by contrast, are more faintly marked.
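A rough sketch of that pooling step (a toy setup with hypothetical models, not the group’s published method): explain each sampled model separately, then aggregate the per-pixel relevances into a mean heatmap and an uncertainty map.

```python
# Sketch: pool explanations from an ensemble of slightly different models
# into a mean heatmap plus an uncertainty map. Toy illustration only.
import torch
import torch.nn as nn

def make_model(seed):
    torch.manual_seed(seed)                     # slightly different parameterizations
    return nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                         nn.Flatten(), nn.Linear(4 * 8 * 8, 2))

image = torch.randn(1, 1, 8, 8)                 # toy "cat or dog" image
heatmaps = []
for seed in range(10):                          # stand-in for samples from a Bayesian posterior
    model = make_model(seed)
    x = image.clone().requires_grad_(True)
    score = model(x)[0].max()                   # score of the predicted class
    score.backward()
    heatmaps.append((x.grad * x).detach().squeeze())

stack = torch.stack(heatmaps)
mean_relevance = stack.mean(dim=0)              # strongly marked: pixels all models rely on
uncertainty = stack.std(dim=0)                  # faint/uncertain: pixels only some models use
```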

“Our findings could prove particularly useful in the area of diagnostics. For example, explanations with a high model certainty could help to identify tissue regions with the highest probability of cancer, speeding up diagnosis. Explanations with high model uncertainty, on the other hand, could be used for AI-based screening applications to reduce the risk of overlooking important information in a diagnostic process,” says Höhne.

Standup meeting: Each group member only has a few minutes to explain his or her scientific results.
(Copyright: Christian Kielmann)

Today, the team consists of three doctoral researchers and four student assistants. Marina Höhne, who is also an associate professor at the University of Tromsø in Norway, explains that the hiring process for the team came with problems of a very particular nature: “My aim is to develop a diverse and heterogeneous team, partly to address the pronounced gender imbalance in machine learning. My job posting for the three PhD positions received twenty applications, all from men. At first, I was at a loss as to what to do. Then I posted the jobs on Twitter to reach out to qualified women candidates. I’m still amazed at the response – around 70,000 people read this tweet and it was retweeted many times, so that in the end I had a diverse and qualified pool of applicants to choose from,” Höhne recalls. She finally appointed two women and one man. Höhne knows all about how difficult it can still be for women to combine career and family. At the time of her doctoral defense, she was nine months pregnant and recalls: “I had been wrestling for some time with the decision to either take a break or complete my doctorate. In the end, I decided on the latter.” Her decision proved a good one as she completed her doctorate with “summa cum laude” while also increasing her awareness of the issue of gender parity in academia.

Understandable AI combined with exciting applications

Höhne already knew which path she wanted to pursue at the start of her master’s program in Technomathematics. “I was immediately won over by Klaus-Robert Müller’s lecture on machine learning,” she recalls. She began working in the group as a student assistant during her master’s program, making a seamless transition to her doctorate. “I did my doctorate through an industry cooperation with the Otto Bock company, working first in Vienna for two years and then at TU Berlin. One of the areas I focused on was developing an algorithm to make it easier for prosthesis users to adjust quickly and effectively to motion sequences after each new fitting,” says Höhne.  After the enriching experience of working directly with patients, she returned to more foundational research on machine learning at TU Berlin. “Understandable artificial intelligence, combined with exciting applications such as medical diagnostics and climate research – that is my passion. When I am seated in front of my programs and formulas, then it’s like I am in a tunnel – I don’t see or hear anything else.”

Marina Höhne has a passion for math. (Copyright: Christian Kielmann)
More information:

Dr. Marina Höhne
Understandable Machine Intelligence Lab (UMI)
E-Mail: marina.hoehne@tu-berlin.de

New Berlin Cell Hospital announced

When cells make the wrong decision, diseases ensue. This insight came from Berlin – namely from Rudolf Virchow. On October 13, 2021, at an event celebrating the 200th birthday of the famous pathologist, physician and politician Rudolf Virchow, the Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC) and the Charité – Universitätsmedizin Berlin, together with several Berlin research institutions, declared the founding of the Berlin Cell Hospital, an institution to shape and develop the cell-based medicine of the future.

From left to right: Prof. David Horst (Charité), Prof. Heike Graßmann (MDC), Dr. Stan Gorski (MDC),
Prof. Nikolaus Rajewsky (MDC), Prof. Christopher Baum (Charité), Prof. Otmar D. Wiestler (Helmholtz Association),
Prof. Angelika Eggert (Charité), Prof. Heyo K. Kroemer (Charité), Michael Müller (Governing Mayor of Berlin)
and Prof. Thomas Sommer (MDC)
(Copyright: Charité / Sabine Gudath)

The Berlin Cell Hospital will begin life as a registered association to allow established institutions to become members. Its key participants are the MDC, the Helmholtz Association, Charité, the Berlin Institute of Health at Charité (BIH) and the Berlin Institute for the Foundations of Learning and Data (BIFOLD). The Cell Hospital also hopes to cooperate with private partners and other institutions in Berlin and Germany, including the Helmholtz Health Centers and the German Centers for Health Research (DZGs), as well as to create an international network.

The number of chronically ill people who require expensive and invasive treatments is growing continuously. At the same time, life expectancy is on the rise, which means the population is getting older and older. Instead of only treating common diseases when their patients start to display serious symptoms – by which time a great deal of damage has already been done – doctors are in urgent need of new diagnostic and therapeutic strategies.

Diseases often begin much earlier than the onset of symptoms. As far back as 1858, the famous pathologist Rudolf Virchow suggested that the origin of diseases can be traced back to changes in individual cells. So how and why do these changes occur?

Each cell is continuously “reading” the genome so it knows how to react to signals from neighboring cells or new environmental conditions. How exactly each individual cell interprets this “book of life,” but also what mistakes happen in the process and which changes disrupt the process, is something scientists have only been able to observe for the past few years thanks to single-cell biology. The volume of data generated for each cell corresponds in magnitude to that produced by classical genomics techniques. The amount of information it contains is unimaginable, and the depth of detail unparalleled. BIFOLD will contribute machine learning research and tools that make this flood of  data manageable.

The Berlin Cell Hospital brings together experts from clinical practice, biomedical research, technology, data science, mathematics and engineering science.
(Copyright: Unsplash)

“It’s as if we discovered a super microscope,” says Nikolaus Rajewsky. “Thanks to these technologies, we can analyze every single cell in a tissue for the very first time and understand when and why it gets sick.” Cell-based medicine wants to use this knowledge to guide cells back to a state of health as quickly as possible – with the help of extremely early diagnostics that recognize when a cell takes its first step in the direction of disease, with the help of targeted procedures on molecular mechanisms and with cellular therapies, RNA-based approaches and similar techniques. The goal of cell-based medicine is to close the gap between classic prevention and medicine that treats only symptomatic patients. Thanks to its personalized treatment strategies, the concept is also suitable for preventing disease relapses and resistance to immunotherapy or chemotherapy.

But successfully implementing cell-based medicine is no easy feat. It requires a multifaceted approach that breaks down disciplinary and institutional boundaries and that, to date, has never existed under one roof in Germany. To understand diseases in a new way, a research concept that brings together experts from clinical practice, biomedical research, technology, data science, mathematics and engineering science is needed – all working together in close proximity to advance novel approaches to medicine. The core pillars are single-cell technologies, patient-specific model systems such as organoids, and new AI solutions. In the new Cell Hospital, these will mainly be applied to the major chronic diseases (cancer, cardiovascular diseases, infectious diseases and neurological diseases).

The Cell Hospital aims to develop molecular prevention strategies and new precision diagnostics, as well as to reliably identify new drug targets for molecular and cellular therapies. In order to transfer knowledge as quickly as possible, the Berlin Cell Hospital is planning a broad innovation and industry program – for example via the Virchow 2.0 Clusters4Future application – which should facilitate dynamic developments and remove any existing obstacles. The resulting innovation ecosystem will hopefully include industry partnerships, cross-sector networking, innovation spaces and labs, and should promote a thriving spin-off culture. An education and training program will target the workforce of the future, including students, researchers and health professionals. The Berlin Cell Hospital aims to unite these basic elements and the critical mass of science under one roof – in close proximity to the clinic and patients, and in a way that involves patients and citizens from the very beginning.

Best Poster Award for Kim Nicoli

During the Summer School on Machine Learning for Quantum Physics and Chemistry in September 2021 in Warsaw, BIFOLD PhD candidate Kim A. Nicoli received the Best Poster Award. His poster was democratically selected by the participants and the scientific committee as the best among the contributions of more than 80 participants. The corresponding paper “Estimation of Thermodynamic Observables in Lattice Field Theories with Deep Generative Models” is a joint international effort of several BIFOLD researchers – Kim Nicoli, Christopher Anders, Pan Kessel, Shinichi Nakajima – as well as a group of researchers affiliated with DESY (Zeuthen) and other institutions. The work is published in Physical Review Letters.

Kim A. Nicoli
(Copyright: Kim Nicoli)

“Modeling and understanding the interactions of quarks – fundamental, indivisible subatomic particles that represent the smallest known units of matter – is the main goal of ongoing research in the field of high energy physics. Deepening our understanding of such phenomena by leveraging modern machine learning techniques would have important implications for many related fields of applied science and research, such as quantum computing devices, drug discovery and many more.”

The summer school on Machine Learning for Quantum Physics and Chemistry was co-organized by the University of Warsaw and the Institute of Photonic Sciences in Barcelona.

More information: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.126.032001

Preventing Image-Scaling Attacks on Machine Learning

BIFOLD Fellow Prof. Dr. Konrad Rieck, head of the Institute of System Security at TU Braunschweig, and his colleagues provide the first comprehensive analysis of image-scaling attacks on machine learning, including a root-cause analysis and effective defenses. Konrad Rieck and his team showed that attacks on the scaling algorithms used in pre-processing for machine learning (ML) can manipulate images unnoticeably, changing their content after downscaling and producing unexpected and arbitrary outputs. “These attacks are a considerable threat, because scaling as a pre-processing step is omnipresent in computer vision,” says Konrad Rieck. The work was presented at the USENIX Security Symposium 2020.

At first glance, no manipulation is visible in the input image. After downscaling, however, the output is changed beyond recognition.

Machine learning is a rapidly advancing field. Complex ML methods not only enable increasingly powerful tools, they also open up new avenues of attack. Research into ML security usually focuses on the learning algorithms themselves, even though the first step of an ML pipeline is the pre-processing of data. In addition to various cleaning and organizing operations on datasets, images are scaled down during pre-processing to speed up the actual learning process that follows. Konrad Rieck and his team were able to show that frequently used scaling algorithms are vulnerable to attacks: it is possible to manipulate input images in such a way that they are indistinguishable from the original to the human eye, but look completely different after downscaling.

An attacker manipulates a source image of a “Do not enter” sign (S) with the goal of creating the target image (T), a “No parking” sign. After a scaling algorithm is applied to the manipulated attack image (A) during pre-processing, the ML model receives the “No parking” sign (D) for training. While the user assumes the model is being trained on a “Do not enter” sign, it is in fact trained on a “No parking” sign.

The vulnerability is rooted in the scaling process: most scaling algorithms consider only a few highly weighted pixels of an image and ignore the rest. Therefore, only these pixels need to be manipulated to achieve drastic changes in the downscaled image. Most pixels of the input picture remain untouched – making the changes invisible to the human eye. In general, scaling attacks are possible wherever downscaling takes place without low-pass filtering – even in video and audio media formats. These attacks are model-independent and thus do not depend on knowledge of the learning model, features or training data.
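The root cause can be illustrated in a few lines of code (a toy numpy sketch, not the authors’ implementation): nearest-neighbor-style downscaling keeps only one source pixel per block, so controlling those few pixels controls the entire output, whereas an area-based (low-pass) resampler averages over all pixels and blunts the manipulation.

```python
# Sketch of why naive downscaling is attackable and how low-pass filtering helps.
# Toy numpy illustration; not the authors' implementation.
import numpy as np

src = np.zeros((256, 256))                      # benign-looking source image
factor = 32                                     # downscaling 256x256 -> 8x8

# Nearest-neighbor style scaling keeps only one pixel per 32x32 block ...
sampled_rows = np.arange(0, 256, factor)
sampled_cols = np.arange(0, 256, factor)

# ... so an attacker only needs to change those 64 pixels to dictate the output.
attack = src.copy()
attack[np.ix_(sampled_rows, sampled_cols)] = 255.0   # barely visible among 65,536 pixels

naive_small = attack[::factor, ::factor]             # fully attacker-controlled (all 255)

# Area/low-pass scaling averages each 32x32 block, so the few changed pixels
# barely influence the result (255 / 1024 is roughly 0.25 per block).
robust_small = attack.reshape(8, factor, 8, factor).mean(axis=(1, 3))

print(naive_small.max(), robust_small.max())         # 255.0 vs. ~0.25
```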

Prof. Dr. Konrad Rieck
(Copyright: Konrad Rieck)

“Image-scaling attacks can become a real threat in security related ML applications. Imagine manipulated images of traffic signs being introduced into the learning process of an autonomous driving system! In BIFOLD we develop methods for the effective detection and prevention of modern attacks like these.”

“Attackers don’t need to know the ML training model and can even succeed with image-scaling attacks in otherwise robust neural networks,” says Konrad Rieck. “Based on our analysis, we were able to identify a few algorithms that withstand image scaling attacks and introduce a method to reconstruct attacked images.”

Further information is available at https://scaling-attacks.net/.

The publication in detail:

Erwin Quiring, David Klein, Daniel Arp, Martin Johns, Konrad Rieck: Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning. USENIX Security Symposium 2020: 1363-1380

Abstract

Machine learning has made remarkable progress in the last years, yet its success has been overshadowed by different attacks that can thwart its correct operation. While a large body of research has studied attacks against learning algorithms, vulnerabilities in the preprocessing for machine learning have received little attention so far. An exception is the recent work of Xiao et al. that proposes attacks against image scaling. In contrast to prior work, these attacks are agnostic to the learning algorithm and thus impact the majority of learning-based approaches in computer vision. The mechanisms underlying the attacks, however, are not understood yet, and hence their root cause remains unknown.

In this paper, we provide the first in-depth analysis of image-scaling attacks. We theoretically analyze the attacks from the perspective of signal processing and identify their root cause as the interplay of downsampling and convolution. Based on this finding, we investigate three popular imaging libraries for machine learning (OpenCV, TensorFlow, and Pillow) and confirm the presence of this interplay in different scaling algorithms. As a remedy, we develop a novel defense against image-scaling attacks that prevents all possible attack variants. We empirically demonstrate the efficacy of this defense against non-adaptive and adaptive adversaries.

More information is available from:

Prof. Dr. Konrad Rieck

Technische Universität Braunschweig
Institute of System Security
Rebenring 56
D-38106 Braunschweig

Email: k.rieck@tu-bs.de