Beyond Explainable AI


Wojciech Samek and Klaus-Robert Müller publish new book on Explainable AI

To tap the full potential of artificial intelligence, we not only need to understand the decisions it makes; these insights must also be made applicable. This is the aim of the new book “xxAI – Beyond Explainable AI”, edited by Wojciech Samek, head of the Artificial Intelligence department at the Fraunhofer Heinrich Hertz Institute (HHI) and BIFOLD researcher, and Klaus-Robert Müller, professor of machine learning at TU Berlin and co-director of BIFOLD. The publication is based on a workshop held during the International Conference on Machine Learning in 2020. The co-editors also include the AI experts Andreas Holzinger, Randy Goebel, Ruth Fong and Taesup Moon. It is already the second publication by Samek and Müller.

Following the strong resonance of the editors’ first book, “Explainable AI: Interpreting, Explaining and Visualizing Deep Learning” (2019), which presented an overview of methods and applications of Explainable AI (XAI) and racked up over 300,000 downloads worldwide, the new publication goes a step further and surveys current trends and developments in the field. In one chapter, for example, Samek and Müller’s team shows that XAI concepts and methods developed for explaining classification problems can also be applied to other types of problems. In classification, the target variables are categorical, as in “What color is the traffic light right now: red, yellow, or green?”. XAI techniques developed for such problems can also help explain models from unsupervised learning, reinforcement learning, or generative modeling. The authors thus expand the horizons of previous XAI research and provide researchers and developers with a set of new tools for explaining a whole new range of problem types and models.
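To make the transfer concrete, here is a minimal, hypothetical sketch: the gradient-times-input attribution commonly used for classifiers, applied unchanged to a regression model, whose scalar output can be differentiated just like a class score. The toy network and the attribution rule are illustrative assumptions, not the specific methods from the book chapter.

```python
# Illustrative sketch only: gradient-x-input attribution, familiar from
# classification, applied to a toy regression model. The network and
# attribution rule are assumptions, not the book's specific methods.
import torch
import torch.nn as nn

# Hypothetical trained regressor standing in for any model.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

x = torch.randn(1, 4, requires_grad=True)  # one input sample
y = model(x)                               # scalar regression output
y.backward()                               # d(output) / d(input)

relevance = (x * x.grad).detach()          # per-feature contribution
print(relevance)
```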

The book is available free of charge.
Copyright: Fraunhofer HHI

As the title “Beyond Explainable AI” suggests, the book also highlights solutions regarding the practical application of insights from methodological aspects to make models more robust and efficient. While previous research has focused on the process from AI as a “black box” to explaining its decisions, several chapters in the new book address the next step, toward an improved AI model. Furthermore, other authors reflect on their research not only in their own field of work, but also in the context of society as a whole. They cover a variety of areas that go far beyond classical XAI research. For example, they address the relationships between explainability and fairness, explainability and causality, and legal aspects of explainability.

The book is available free of charge here.

New professorship for Machine Learning and Communications


“I want to move beyond purely ‘explaining’ AI”

BIFOLD researcher Dr. Wojciech Samek has been appointed Professor of Machine Learning and Communications at TU Berlin, effective 1 May 2022. Professor Samek heads the Department of Artificial Intelligence at the Fraunhofer Heinrich Hertz Institute (HHI).

Prof. Wojciech Samek receives his certificate of appointment from the President of TU Berlin, Prof. Geraldine Rauch.
Copyright: private


Professor Samek – very warm congratulations on your appointment at TU Berlin. You have been involved in BIFOLD – Berlin’s AI competence center – since 2014. What connects you to BIFOLD?

“I have supported BIFOLD and its predecessor projects to the best of my abilities since the very beginning. My cooperation with BIFOLD is very important to me – as is strengthening Berlin’s role as a center of artificial intelligence (AI). I have worked very closely and successfully with BIFOLD Co-Director Professor Klaus-Robert Müller on the explainability of artificial intelligence. I have also enjoyed a successful cooperation with Professor Thomas Wiegand at the interface between machine learning (ML) and compression. In addition to the 12 patent applications resulting from these and other collaborations with researchers at TU Berlin, my work with Thomas Wiegand has had a considerable influence on the new international MPEG-7 NNR standard on the compression of neural networks.”

What areas would you like to focus on in your future research at TU Berlin/BIFOLD?

“My goal is to further develop three areas: explainability and trustworthiness of artificial intelligence, the compression of neural networks, and so-called federated learning. I aim to focus on the practical, methodological, and theoretical aspects of machine learning at its interfaces with other areas of application. The combination of machine learning and communications is something I find particularly interesting. In my research, I use explainability to improve ML models and to make them more robust. So, I am looking to move beyond purely “explaining.” My goal is to increase reliability and ultimately develop explanations based on checking mechanisms for ML models.”

How does this research complement the research work of BIFOLD and TU Berlin?

“These research areas are of great importance not only for BIFOLD with its focus on ‘responsible AI’ and ‘communications’ as an area of application; they also offer a whole range of possibilities for collaboration with researchers at Faculty IV. Klaus-Robert Müller, Grégoire Montavon, and I are planning to transfer the explainability concepts we developed to other types of ML models and areas such as regression, segmentation, and clustering. Enhanced visualizations of explanations will enable us to develop more reliable and more explainable ML models for communication applications. My research will also focus on the development of enhanced compression methods for neural networks and federated learning. Examining the interaction between learning, compression, and communications is of great interest here. Overall, we can say that my research will strengthen the research profile of Faculty IV while at the same time offering very many new possibilities for cooperation at the interface between ML, information theory, signal compression, and communications.”

Do you already have an idea of what you would like to implement in teaching?

“I am really looking forward to the new challenges that teaching will bring. Within BIFOLD, I am planning a special course titled “Advanced Course on Machine Learning & Communications.” This would focus on federated learning, methods for the compression of neural networks, efficient neural architectures, and ML methods for communication applications. The lectures could also be supplemented by a lab. What I envisage is the implementation of federated learning on many networked devices. This would provide hands-on opportunities for students to learn directly about topics such as resource-saving ML, compression of neural networks, and federated learning. I would also be very interested in giving a course on explainable and trustworthy AI. A BIFOLD special course, a master’s/bachelor’s seminar, or a supplementary module to the bachelor’s course on cognitive algorithms would all provide suitable formats for this.”
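As a rough sketch of what such a hands-on lab might involve, the following simulates one round of federated averaging (FedAvg) with three clients. The toy model, data, and single local training step are illustrative assumptions, not the planned course material.

```python
# Minimal federated-averaging sketch (illustrative only; the lab setup
# described above is not specified in this detail).
import copy
import torch
import torch.nn as nn

def local_update(model, data, target, lr=0.1):
    """One hypothetical local training step on a client device."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss = nn.functional.mse_loss(local(data), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return local.state_dict()

def federated_average(states):
    """Average client weights into a new global state (FedAvg)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

# One toy round with three simulated clients.
global_model = nn.Linear(4, 1)
clients = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]
states = [local_update(global_model, x, y) for x, y in clients]
global_model.load_state_dict(federated_average(states))
```

In a real deployment only the weight updates travel over the network, which is where the interplay of learning, compression, and communications mentioned above comes in.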

Summer School Information Event


Date and Time: Monday, 25 April 2022, 4:00 pm

Speakers: Andrea Hamm, Martin Schuessler, Dr. Stefan Ullrich

Venue: virtual event

Participation: If you are interested in participating, please contact gs@bifold.berlin

Andrea Hamm and Martin Schuessler, supported by Dr. Stefan Ullrich, will present the program of the BIFOLD Ethics in Residence Summer School, which will take place from 20 to 24 June at a hotel near Berlin and at the Weizenbaum Institute. The Summer School complements the technological research on artificial intelligence (AI) within the AI Competence Centres with aspects of ethics, explainability, and sustainability. It is organized within the Ethics in Residence program, a collaboration between the Weizenbaum Institute for the Networked Society – the German Internet Institute – and the Network of the German AI Competence Centres.

The Summer School is fully funded by BIFOLD and open to all BIFOLD PhD students as well as to those from the other centres of the German AI Competence Centre Network (ML2R, MCML, TUE-AI, ScaDS, DFKI).


The program includes multiple hands-on workshops to advance the participants’ individual research projects, several high-profile international guest lectures with Q&A sessions, a panel discussion, and participant presentation sessions with expert jury feedback. The invited international experts and guest speakers have backgrounds in computing within limits, disaster research, and COVID-19 data research. In addition, the summer school offers two main tracks, one on explainable deep neural networks (XNN) and one on sustainable AI (SAI), for more specialized training of the PhD students.

About the presenters & track leaders
Andrea Hamm
Copyright: private

Andrea Hamm is a doctoral researcher at the Weizenbaum Institute for the Networked Society and TU Berlin. In her dissertation, she investigates civic technologies for environmental monitoring in the context of making cities and communities more sustainable. She is particularly interested in the real-world effects of AI technologies, for instance in understanding how AI-supported simulations contribute to reducing CO2 footprints and material consumption. She has published at international venues such as the ACM CHI Conference on Human Factors in Computing Systems, the ACM ICT for Sustainability Conference, and the International Conference on Wirtschaftsinformatik (WI). Her work spans the interdisciplinary space from human-computer interaction (HCI) and design studies to communication studies. She is a member of the German-Japanese Society for Social Sciences and the AI Climate Change Center Berlin-Brandenburg. In 2019, she was a guest researcher at Aarhus University, Denmark, after previously studying at Freie Universität Berlin (Germany), Vrije Universiteit Brussel (Belgium), and Université Catholique de Lille (France).

Martin Schuessler is a tech-savvy, interdisciplinary human-computer interaction researcher who believes that the purpose of technology is to enhance people’s lives. As an undergraduate, he pursued this belief from a technical perspective by investigating the usability of new interaction paradigms such as tangible and spatial interfaces at OvGU Magdeburg and Telekom Innovation Labs Berlin. As a PhD student at the interdisciplinary Weizenbaum Institute, he adopted a broader, often less technical perspective on the same belief (still involving a lot of programming). His dissertation work looks at ways to make computer vision systems more intelligible for users. Martin has been a visiting researcher at the University College London Interaction Centre and the Heidelberg Collaboratory for Image Processing. He has published articles at top international conferences, including the International Conference on Learning Representations (ICLR) and the ACM conferences on Human Factors in Computing Systems (CHI), Computer-Supported Cooperative Work and Social Computing (CSCW), and Intelligent User Interfaces (IUI).

Martin Schüssler
Copyright: private

As part of the BIFOLD Ethics in Residence program, the Summer School as a whole is supported by Dr. Stefan Ullrich.

BIFOLD Summer School


Ethics in Machine Learning & Data Management

The BIFOLD summer school will take place from 20 to 24 June 2022 at the Weizenbaum Institute for the Networked Society (near Zoo station). It will focus on the latest ethical considerations in machine learning and data management, offering lectures and workshops along two main tracks. The school is designed for doctoral students of the BMBF’s network of AI competence centres and is organized by the BIFOLD Graduate School in collaboration with the Ethics in Residence program and researchers of the Weizenbaum Institute for the Networked Society – the German Internet Institute.

Sufficient time for hands-on workshops and individual feedback is included.
Copyright: StockSnap/Pixabay

The summer school complements technological research on artificial intelligence (AI) within the AI competence centres with ethical aspects of explainability and sustainability. It is part of the Ethics in Residence program. The program includes multiple hands-on workshops to advance individual research projects, several guest lectures with Q&A, a panel discussion, and PhD student presentation sessions with expert jury feedback. The summer school offers two tracks, on explainable deep neural networks (XNN) and sustainable AI (SAI), for more specialized training of the doctoral students. All of BIFOLD’s PhD students are invited to participate. In addition, BIFOLD offers places for PhD students of the German AI competence centre network (ML2R, MCML, TUE-AI, ScaDS, DFKI).
Summer School Website

Invited international experts

International expert researchers with backgrounds in computing within limits, disaster research, and COVID-19 data research are joining the summer school as speakers and are available for individual feedback.

Daniel Pargman, Ph.D.
KTH Royal Institute of Technology, Stockholm, Sweden

Teresa Cerratto Pargman, Ph.D.
Stockholm University, Sweden

Yuya Shibuya, Ph.D.
The University of Tokyo, Japan

Raphael Sonabend, Ph.D.
Imperial College London, UK & University of Kaiserslautern, Germany

Rainer Mühlhoff, Ph.D.
Osnabrück University, Germany

Enrico Costanza, Ph.D.
University College London, UK

Focus tracks

Track XNN focuses on evaluating interpretable machine learning to provide students with the ability to empirically validate claims about interpretability:

  • Critical review of XAI methods: taxonomies of XAI approaches, review of explanation goals, user benefits and current results from user studies
  • Rigorous methods for validating explanation methods with users: interdisciplinary methodological training, suitable evaluation datasets, user tasks and study designs, participant recruitment, validity, and reproducibility considerations

Track SAI emphasizes ecological and socio-political aspects of AI, examining how AI and data can contribute to action toward a sustainability transition:

  • Sustainability in research and policies: What is sustainability? United Nations SDGs, COVID-19, critical thinking on AI, environmental monitoring, sustainable smart cities and communities
  • SAI approaches and methods: data feminism, digital civics, computing within limits, citizen science, social media data, and mixed methods.

The complete program can be found here.

Organisational details:

Participants are expected to attend the entire program, with arrival on 19 June and departure on 25 June 2022. There is no tuition fee.
Please get in touch with us if you need child care.

Program venue: Weizenbaum Institute for the Networked Society

Accommodation: Hampton by Hilton Berlin City West, within walking distance of the Weizenbaum Institute & TU Berlin, in the heart of Berlin, with the nearby Tiergarten park providing plenty of greenery

Application/Registration

Please send one PDF file, including your CV, an abstract of your (preliminary) Ph.D. project, and a short motivation message describing why you would like to participate and what you would like to learn during the summer school, to gsapplication@bifold.tu-berlin.de.
The application deadline is 30 April 2022.

Organisers

Andrea Hamm
Doctoral Researcher, Research Group “Responsibility and the Internet of Things”, Weizenbaum Institute for the Networked Society & TU Berlin, Germany

Martin Schuessler
Doctoral Researcher, Research Group “Criticality of AI-based Systems”, Weizenbaum Institute for the Networked Society & TU Berlin, Germany

Dr. Stefan Ullrich
Weizenbaum Institute for the Networked Society, Berlin, Germany

Prof. Dr. Volker Markl
Co-Director BIFOLD

Prof. Dr. Klaus-Robert Müller
Co-Director BIFOLD

Dr. Tina Schwabe, Dr. Manon Grube
Coordinators of the BIFOLD Graduate School


Intelligent Machines Also Need Control


Dr. Marina Höhne, BIFOLD Junior Fellow, researches explainable artificial intelligence, funded by the German Federal Ministry of Education and Research.

Happy to establish her own research group: Dr. Marina Höhne. (Copyright: Christian Kielmann)

For most people, the words mathematics, physics, and programming in a single sentence would be reason enough to discreetly but swiftly change the subject. Not so for Dr. Marina Höhne, postdoctoral researcher in TU Berlin’s Machine Learning Group led by Professor Dr. Klaus-Robert Müller, Junior Fellow at the Berlin Institute for the Foundations of Learning and Data (BIFOLD), and passionate mathematician. Since February 2020, the 34-year-old mother of a four-year-old son has been leading her own research group, Understandable Machine Intelligence (UMI Lab), funded by the Federal Ministry of Education and Research (BMBF).

In 2019, the BMBF published the call “Förderung von KI-Nachwuchswissenschaftlerinnen” (funding for women early-career researchers in AI), which aims to increase the number of qualified women in AI research in Germany and to strengthen the influence of female researchers in this area in the long term.
“The timing of the call was not ideal for me, as it came more or less right after one year of parental leave,” Höhne recalls. Nevertheless, she went ahead and submitted a detailed research proposal, which was approved. She was awarded two million euros funding over a period of four years, a sum comparable to a prestigious ERC Consolidator Grant. “For me, this came as an unexpected but wonderful opportunity to gain experience in organizing and leading research.”

A holistic understanding of AI models is needed

The topic of her research is explainable artificial intelligence (XAI). “My team focuses on different aspects of understanding AI models and their decisions. A good example of this is image recognition. Although it is now possible to identify the relevant areas in an image that contribute significantly to an AI system’s decision – for example, whether the nose or the ear of a dog was influential in the model’s classification of the animal – there is still no single method that conclusively provides a holistic understanding of an AI model’s behavior. However, in order to be able to use AI models reliably in areas such as medicine or autonomous driving, where safety is important, we need transparent models. We need to know how the model behaves before we use it, to minimize the risk of misbehavior,” says Marina Höhne, outlining her research approach. Among other things, she and her research team developed explanation methods that use so-called Bayesian neural networks to obtain information about the uncertainties of decisions made by an AI system and then present this information in a way that is understandable for humans.

To achieve this, many different AI models are generated, each of which makes decisions based on a slightly different parameterization. Each of these models is explained separately, and the explanations are then pooled and displayed in a heatmap. Applied to image recognition, this means that the pixels of an image that contributed significantly across all models to the decision of what it depicts, cat or dog, are strongly marked. Pixels that only some of the models use in reaching their decision, by contrast, are more faintly marked.
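A minimal sketch of this pooling idea, using Monte Carlo dropout as a convenient stand-in for sampling model parameterizations (the group’s actual Bayesian methods differ in detail): each sampled model yields its own gradient-times-input map, the maps are averaged into a consensus heatmap, and their spread indicates uncertainty.

```python
# Illustrative sketch only: sampling "different parameterizations" via
# MC dropout, explaining each sample, and pooling the explanations.
# Model, attribution rule, and sample count are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.5),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 2),
)
model.train()  # keep dropout active so each pass samples a new "model"

image = torch.randn(1, 3, 32, 32)  # stand-in for a cat/dog image
maps = []
for _ in range(20):                # 20 sampled parameterizations
    x = image.clone().requires_grad_(True)
    model(x)[0, 1].backward()      # gradient of the "dog" score
    maps.append((x * x.grad).sum(dim=1).detach())  # per-pixel map

maps = torch.stack(maps)
mean_heatmap = maps.mean(dim=0)    # strong where all models agree
uncertainty = maps.std(dim=0)      # high where only some models rely on a pixel
```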

“Our findings could prove particularly useful in the area of diagnostics. For example, explanations with a high model certainty could help to identify tissue regions with the highest probability of cancer, speeding up diagnosis. Explanations with high model uncertainty, on the other hand, could be used for AI-based screening applications to reduce the risk of overlooking important information in a diagnostic process,” says Höhne.

Standup meeting: Each group member only has a few minutes to explain his or her scientific results.
(Copyright: Christian Kielmann)

Today, the team consists of three doctoral researchers and four student assistants. Marina Höhne, who is also an associate professor at the University of Tromsø in Norway, explains that the hiring process came with problems of a very particular nature: “My aim is to develop a diverse and heterogeneous team, partly to address the pronounced gender imbalance in machine learning. My job posting for the three PhD positions received twenty applications, all from men. At first, I was at a loss as to what to do. Then I posted the jobs on Twitter to reach out to qualified women candidates. I’m still amazed at the response – around 70,000 people read this tweet and it was retweeted many times, so that in the end I had a diverse and qualified pool of applicants to choose from,” Höhne recalls. She finally appointed two women and one man. Höhne knows all about how difficult it can still be for women to combine career and family. At the time of her doctoral defense, she was nine months pregnant and recalls: “I had been wrestling for some time with the decision to either take a break or complete my doctorate. In the end, I decided on the latter.” Her decision proved a good one, as she completed her doctorate with “summa cum laude” while also increasing her awareness of the issue of gender parity in academia.

Understandable AI combined with exciting applications

Höhne already knew which path she wanted to pursue at the start of her master’s program in Technomathematics. “I was immediately won over by Klaus-Robert Müller’s lecture on machine learning,” she recalls. She began working in the group as a student assistant during her master’s program, making a seamless transition to her doctorate. “I did my doctorate through an industry cooperation with the Otto Bock company, working first in Vienna for two years and then at TU Berlin. One of the areas I focused on was developing an algorithm to make it easier for prosthesis users to adjust quickly and effectively to motion sequences after each new fitting,” says Höhne. After the enriching experience of working directly with patients, she returned to more foundational research on machine learning at TU Berlin. “Understandable artificial intelligence, combined with exciting applications such as medical diagnostics and climate research – that is my passion. When I am seated in front of my programs and formulas, it’s like I am in a tunnel – I don’t see or hear anything else.”

Marina Höhne has a passion for math. (Copyright: Christian Kielmann)
More information:

Dr. Marina Höhne
Understandable Machine Intelligence Lab (UMI)
E-Mail: marina.hoehne@tu-berlin.de

AI in Medicine Must Be Explainable


Analysis system for breast cancer diagnosis developed

Scientists from TU Berlin and Charité – Universitätsmedizin Berlin, together with the University of Oslo, have developed a new analysis system for breast cancer diagnostics based on tissue sections that relies on artificial intelligence (AI). Two advances make the system unique: First, it is the first to integrate morphological, molecular, and histological data in a single analysis. Second, it provides an explanation of the AI decision process in the form of so-called heatmaps. These heatmaps show, pixel by pixel, which image information contributed to the AI decision and how strongly. This allows physicians to follow the result of the AI analysis and check its plausibility. Artificial intelligence thus becomes explainable – a decisive and indispensable step forward if AI systems are to support medicine in everyday clinical practice. The research results have now been published in Nature Machine Intelligence.
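Heatmap explanations of this kind are typically produced with attribution methods such as layer-wise relevance propagation. As a much simpler, generic illustration of the pixel- and region-wise idea, the sketch below computes an occlusion heatmap: a region’s contribution is estimated by how much the class score drops when that region is masked. The toy model, patch size, and mask value are assumptions for illustration, not details of the published system.

```python
# Generic occlusion heatmap, illustrating the region-wise attribution
# idea described above. This is NOT the method of the published system;
# model, patch size, and mask value are assumed.
import torch
import torch.nn as nn

def occlusion_heatmap(model, image, target_class, patch=8):
    """Contribution of each patch = score drop when the patch is masked."""
    model.eval()
    with torch.no_grad():
        base = model(image)[0, target_class].item()
        _, _, height, width = image.shape
        heat = torch.zeros(height // patch, width // patch)
        for i in range(0, height, patch):
            for j in range(0, width, patch):
                masked = image.clone()
                masked[:, :, i:i + patch, j:j + patch] = 0.0
                drop = base - model(masked)[0, target_class].item()
                heat[i // patch, j // patch] = drop
    return heat  # high values: region strongly supported the diagnosis

# Toy usage with a stand-in classifier over 64x64 "tissue" images.
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
tissue = torch.randn(1, 3, 64, 64)
print(occlusion_heatmap(toy_model, tissue, target_class=1))
```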

Cancer medicine is increasingly concerned with the molecular characterization of tumor tissue samples. Among other things, this involves determining whether and/or how the DNA in the tumor tissue has changed, as well as the gene and protein expression in the samples. At the same time, it is becoming increasingly clear that cancer progression is closely linked to intercellular connections and to the interaction of cancer cells with the surrounding tissue, including the immune system.

Image data provide high spatial resolution

While microscopic techniques allow biological processes to be examined with high spatial resolution, molecular markers can only be measured microscopically to a limited extent. Instead, they are determined from proteins or DNA extracted from tissue samples. As a consequence, they usually offer no spatial resolution, and their relationship to the microscopic structures is therefore typically unclear. “In breast cancer, it is known that the number of immigrated immune cells, so-called lymphocytes, in the tumor tissue influences the patient’s prognosis. It is also being discussed whether this number has predictive value as well – that is, whether it allows statements about how well a particular therapy will work,” says Prof. Dr. Frederick Klauschen of the Institute of Pathology at Charité.

“The problem: we have good, reliable molecular data and good, spatially highly resolved histological data. But until now, the crucial bridge between the imaging data and the high-dimensional molecular data was missing,” adds Prof. Dr. Klaus-Robert Müller, professor of machine learning at TU Berlin. The two scientists have been cooperating for several years under the umbrella of the national AI competence center Berlin Institute for the Foundations of Learning and Data (BIFOLD), based at TU Berlin.

The missing link between molecular and histological data

Determination of tumor-infiltrating lymphocytes (TILs) using explainable AI technology: histological preparation of a breast carcinoma.
(© Frederick Klauschen)

The result of the AI method is a so-called heatmap that marks the TILs in red. Other tissue/cells: blue and green.
(© Frederick Klauschen)

The approach now published achieves exactly this symbiosis. “Our system enables the robust detection of pathological changes in microscopic images. In parallel, we provide a precise heatmap visualization showing which pixel of the microscopic image contributed to the algorithm’s diagnosis, and to what extent,” explains Klaus-Robert Müller. In addition, the scientists have taken the method a big step further: “Our analysis system was trained with machine learning methods so that it can also predict various molecular characteristics, such as the state of the DNA, gene expression, and protein expression in certain areas of the tissue, from the histological images.”

Next on the agenda are certification and further clinical validations, including tests in routine tumor diagnostics. But Frederick Klauschen is convinced: “The method we have developed will make it possible in the future to make histopathological tumor diagnostics more precise, more standardized, and thus of higher quality.”

Publication:

Morphological and molecular breast cancer profiling through explainable machine learning, Nature Machine Intelligence

For further information, please contact:

Prof. Dr. Klaus-Robert Müller
TU Berlin
Machine Learning
Tel.: 030 314 78621
E-Mail: klaus-robert.mueller@tu-berlin.de

Prof. Dr. Frederick Klauschen
Charité – Universitätsmedizin Berlin
Institute of Pathology
Tel.: 030 450 536 053
E-Mail: frederick.klauschen@charite.de

BIFOLD Fellow Dr. Wojciech Samek heads newly established AI research department at Fraunhofer HHI


Dr. Samek (l.) and Prof. Müller in front of an XAI demonstrator at Fraunhofer HHI. (Copyright: TU Berlin/Christian Kielmann)

The Fraunhofer Heinrich Hertz Institute (HHI) has established a new research department dedicated to “Artificial Intelligence”. The AI expert and BIFOLD Fellow Dr. Wojciech Samek, who previously led the research group “Machine Learning” at Fraunhofer HHI, will head the new department. With this move, Fraunhofer HHI aims to expand the transfer of its AI research on topics such as explainable AI and neural network compression to industry.

Dr. Wojciech Samek: “The mission of our newly founded department is to make today’s AI truly trustworthy and practicable in all respects. To achieve this, we will collaborate very closely with BIFOLD in order to overcome the limitations of current deep learning models regarding explainability, reliability and efficiency.”

“Congratulations! I look forward to continued successful teamwork with BIFOLD Fellow Wojciech Samek, who is a true AI hotshot.”

BIFOLD Director Prof. Dr. Klaus-Robert Müller

The new department further strengthens the already existing close connection between basic AI research at BIFOLD and applied research at Fraunhofer HHI and is a valuable addition to the dynamic AI ecosystem in Berlin.

“The large Berlin innovation network centered around BIFOLD is unique in Germany. This ensures that the latest research results will find their way into business, science and society.”

BIFOLD Director Prof. Dr. Volker Markl