Best Poster Award for Kim Nicoli

During the Summer School on Machine Learning for Quantum Physics and Chemistry, held in Warsaw in September 2021, BIFOLD PhD candidate Kim A. Nicoli received the Best Poster Award. His poster was chosen by a vote of the participants and the scientific committee as the best among the contributions of more than 80 participants. The corresponding paper, “Estimation of Thermodynamic Observables in Lattice Field Theories with Deep Generative Models”, is a joint international effort of several BIFOLD researchers: Kim Nicoli, Christopher Anders, Pan Kessel and Shinichi Nakajima, as well as a group of researchers affiliated with DESY (Zeuthen) and other institutions. The work is published in Physical Review Letters.

Kim A. Nicoli (Copyright: Kim Nicoli)

“Modeling and understanding the interactions of quarks, the fundamental, indivisible subatomic particles that represent the smallest known units of matter, is the main goal of ongoing research in the field of High Energy Physics. Deepening our understanding of such phenomena by leveraging modern machine learning techniques would have important implications for many related fields of applied science and research, such as quantum computing devices, drug discovery and many more.”

The Summer School on Machine Learning for Quantum Physics and Chemistry was co-organized by the University of Warsaw and the Institute of Photonic Sciences, Barcelona.

More information: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.126.032001

Making the use of AI systems safe

BIFOLD Fellow Dr. Wojciech Samek and Luis Oala (Fraunhofer Heinrich Hertz Institute) together with Jan Macdonald and Maximilian März (TU Berlin) were honored with the award for “best scientific contribution” at this year’s medical imaging conference BVM. Their paper “Interval Neural Networks as Instability Detectors for Image Reconstructions” demonstrates how uncertainty quantification can be used to detect errors in deep learning models.

The award winners were announced during the virtual BVM (Bildverarbeitung für die Medizin) conference on March 9, 2021. The award for “best scientific contribution” is granted each year by the BVM Award Committee. It honors innovative research with a methodological focus on medical image processing in a medically relevant application context.

The interdisciplinary group of researchers investigated the detection of instabilities that may occur when deep learning models are used for image reconstruction tasks. Although neural networks often empirically outperform traditional reconstruction methods, their use in sensitive medical applications remains controversial. Gaps in the understanding of an AI system’s behavior create a risk of system failure. The identification of failure modes in AI systems is therefore an important prerequisite for their reliable deployment in medicine.

In a recent series of works, it has been demonstrated that deep learning approaches are susceptible to various types of instabilities, caused for instance by adversarial noise or out-of-distribution features. It is argued that this phenomenon can be observed regardless of the underlying architecture and that there is no easy remedy. Based on this insight, the present work demonstrates on two use cases how uncertainty quantification methods can be employed as instability detectors. In particular, it is shown that the recently proposed Interval Neural Networks are highly effective in revealing instabilities of reconstructions. This is an important contribution to making the use of AI systems safer and more reliable.
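To illustrate the general idea only, and not the authors’ exact architecture or training procedure, the following minimal NumPy sketch propagates intervals through a small feed-forward network whose weights carry an interval radius. The width of the output interval serves as a per-output uncertainty score that could flag potentially unstable reconstructions. All layer sizes, weight radii and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def interval_affine(x_c, x_r, W_c, W_r, b):
    """Propagate a midpoint/radius interval through an affine layer whose
    weights lie in the interval [W_c - W_r, W_c + W_r] (W_r >= 0)."""
    y_c = W_c @ x_c + b
    # Sound enclosure for the product of two intervals in midpoint/radius form.
    y_r = np.abs(W_c) @ x_r + W_r @ np.abs(x_c) + W_r @ x_r
    return y_c, y_r

def interval_relu(x_c, x_r):
    """ReLU is monotone, so it can be applied to the bounds directly."""
    lo = np.maximum(x_c - x_r, 0.0)
    hi = np.maximum(x_c + x_r, 0.0)
    return (lo + hi) / 2.0, (hi - lo) / 2.0

def uncertainty_score(x, layers):
    """Width of the output interval, used here as a per-output uncertainty score."""
    x_c, x_r = x, np.zeros_like(x)
    for i, (W_c, W_r, b) in enumerate(layers):
        x_c, x_r = interval_affine(x_c, x_r, W_c, W_r, b)
        if i < len(layers) - 1:            # hidden layers use ReLU
            x_c, x_r = interval_relu(x_c, x_r)
    return 2.0 * x_r                       # interval width per output

# Toy two-layer "reconstruction" network with small, randomly chosen weight radii.
layers = [
    (rng.normal(size=(32, 16)), 0.01 * rng.random((32, 16)), np.zeros(32)),
    (rng.normal(size=(16, 32)), 0.01 * rng.random((16, 32)), np.zeros(16)),
]

clean = rng.normal(size=16)
perturbed = clean + 0.5 * rng.normal(size=16)   # e.g. noise-corrupted input

print("mean interval width (clean):    ", uncertainty_score(clean, layers).mean())
print("mean interval width (perturbed):", uncertainty_score(perturbed, layers).mean())
```

In the paper itself, such interval widths are related to the actual reconstruction error under adversarial and out-of-distribution perturbations; the toy network above merely shows how the interval bounds are propagated.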

The paper in detail:
“Interval Neural Networks as Instability Detectors for Image Reconstructions”

Authors:
Jan Macdonald, Maximilian März, Luis Oala, Wojciech Samek

Abstract:
This work investigates the detection of instabilities that may occur when utilizing deep learning models for image reconstruction tasks. Although neural networks often empirically outperform traditional reconstruction methods, their usage for sensitive medical applications remains controversial. Indeed, in a recent series of works, it has been demonstrated that deep learning approaches are susceptible to various types of instabilities, caused for instance by adversarial noise or out-of-distribution features. It is argued that this phenomenon can be observed regardless of the underlying architecture and that there is no easy remedy. Based on this insight, the present work demonstrates how uncertainty quantification methods can be employed as instability detectors. In particular, it is shown that the recently proposed Interval Neural Networks are highly effective in revealing instabilities of reconstructions. Such an ability is crucial to ensure a safe use of deep learning-based methods for medical image reconstruction.

Publication:
In: Bildverarbeitung für die Medizin 2021. Informatik aktuell. Springer Vieweg, Wiesbaden.
https://doi.org/10.1007/978-3-658-33198-6_79