BIFOLD PI Dr. Samek talks about Explainable AI at NeurIPS 2020 Social Event

BIFOLD Principal Investigator Dr. Wojciech Samek (Fraunhofer HHI) talked about explainable and trustworthy AI at the “Decemberfest on Trustworthy AI Research”, a social event of the annual Conference on Neural Information Processing Systems (NeurIPS 2020). NeurIPS is a leading international conference on neural information processing systems and Machine Learning (ML), covering their biological, technological, mathematical, and theoretical aspects.

Dr. Wojciech Samek

Dr. Wojciech Samek talked about the relation between explainability and trustworthiness of ML models. In particular, he showed that Layer-wise Relevance Propagation (LRP) – a state-of-the-art explainable AI (XAI) technique developed by three BIFOLD PIs – can help foster trust in ML models by detecting biases and uncovering potentially dangerous Clever Hans-type behaviours.
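At its core, LRP redistributes a model's output score backwards through the network, layer by layer, so that each input feature receives a relevance value while the total relevance is (approximately) conserved. The following is a minimal sketch of one common variant, the epsilon rule, for a toy two-layer ReLU network; the function and variable names are illustrative, not taken from any particular LRP library.

```python
import numpy as np

def lrp_epsilon(W, a, R, eps=1e-6):
    """One backward LRP step (epsilon rule): redistribute the relevance R
    of a layer's outputs onto its inputs a, through weight matrix W."""
    z = a @ W                        # pre-activations, shape (out,)
    s = R / (z + eps * np.sign(z))   # stabilized element-wise ratio
    return a * (W @ s)               # relevance of the inputs, shape (in,)

# Toy network: 3 inputs -> 2 hidden units (ReLU) -> 1 output
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(2, 1))
x = np.array([1.0, 2.0, 3.0])
h = np.maximum(0.0, x @ W1)          # hidden activations
y = h @ W2                           # output score to be explained

R_h = lrp_epsilon(W2, h, y)          # relevance at the hidden layer
R_x = lrp_epsilon(W1, x, R_h)        # relevance at the input features
# Conservation check: input relevances sum back to the output score
print(np.allclose(R_x.sum(), y.sum(), atol=1e-4))  # prints True
```

Inspecting `R_x` shows which input features the score is attributed to; in a real model, spurious attributions (e.g. relevance concentrated on an image watermark rather than the object) are exactly the Clever Hans signals the talk described.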

Once detected, these biases and misbehaviours can be selectively removed, resulting in more reliable and trustworthy models. In a recent unifying review on deep and shallow anomaly detection, the BIFOLD researchers and Prof. Thomas Dietterich (Oregon State University), the second speaker of the session, show that this idea of using explainability to gain trust can be transferred beyond supervised classification settings.

The Decemberfest at NeurIPS 2020 also brought together researchers from institutions within the German Network of National Centres of Excellence for AI Research. Dr. Samek’s talk was followed by presentations by researchers from DFKI, MCML, ML2R and the TueAI Center and their respective international research partners.
Through the participation of Emmanuel Vincent (Deputy Head of Science of the Nancy – Grand Est research centre of Inria), Joaquin Vanschoren (Eindhoven University of Technology, founding member of the European AI networks CLAIRE and ELLIS), Bertram Braunschweig (Inria, formerly President of the French Association for Artificial Intelligence), Philipp Slusallek (DFKI, founding member of CLAIRE), Matthias Bethge (TueAI Center, co-initiator of ELLIS) and Stefan Wrobel (ML2R, senior expert of the EU High‐Level Expert Group on Artificial Intelligence), the panel also represented perspectives of European cooperation in XAI research.