Several BIFOLD research groups are contributing to the program of the 38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024), held in Vancouver, Canada, from December 10-15, 2024. The Machine Learning group of BIFOLD Co-Director Klaus-Robert Müller alone is represented at the conference with eight publications. NeurIPS covers a wide range of topics at the intersection of machine learning and neuroscience, including cognitive science, psychology, computer vision, statistical linguistics, and information theory.
BIFOLD research:
Machine Learning Group
- Do Histopathological Foundation Models Eliminate Batch Effects? A Comparative Study
- Authors: Jonah Kömen, Hannah Marienwald, Jonas Dippel, Julius Hense
- Preprint: https://arxiv.org/abs/2411.05489
- Presented at: NeurIPS workshop “AIM-FM: Advancements In Medical Foundation Models: Explainability, Robustness, Security, and Beyond”
- Presented by: Jonah Kömen, Hannah Marienwald, Julius Hense
- Doob's Lagrangian: A Sample-Efficient Variational Approach to Transition Path Sampling
- Authors: Yuanqi Du, Michael Plainer, Rob Brekelmans, Chenru Duan, Frank Noé, Carla P. Gomes, Alán Aspuru-Guzik, Kirill Neklyudov
- Preprint: https://arxiv.org/abs/2410.07974
- Code: https://github.com/plainerman/variational-doob
- Presented by: Michael Plainer
- Federated Learning over Connected Modes
- Authors: Dennis Grinwald, Philipp Wiesner, Shinichi Nakajima
- Preprint: https://arxiv.org/abs/2403.03333
- Poster: https://nips.cc/virtual/2024/poster/95719
- Generative Fractional Diffusion Models
- Authors: Gabriel Nobis, Maximilian Springenberg, Marco Aversa, Michael Detzel, Rembert Daems, Roderick Murray-Smith, Shinichi Nakajima, Sebastian Lapuschkin, Stefano Ermon, Tolga Birdal, Manfred Opper, Christoph Knochenhauer, Luis Oala, Wojciech Samek
- Preprint: https://arxiv.org/abs/2310.17638
- MambaLRP: Explaining Selective State Space Sequence Models
- Authors: Farnoush Rezaei Jafari, Grégoire Montavon, Klaus-Robert Müller, Oliver Eberle
- Preprint: https://arxiv.org/abs/2406.07592
- Poster: https://nips.cc/virtual/2024/poster/96794
- When Does Perceptual Alignment Benefit Vision Representations?
- Authors: Shobhita Sundaram, Stephanie Fu, Lukas Muttenthaler, Netanel Y. Tamir, Lucy Chai, Simon Kornblith, Trevor Darrell, Phillip Isola
- Preprint: https://arxiv.org/abs/2410.10817
- Blog post: https://ssundaram21.github.io/repalignment/
- Presented by: Shobhita Sundaram and Stephanie Fu
- Main conference paper
- xCG: Explainable Cell Graphs for Survival Prediction in Non-Small Cell Lung Cancer
- Authors: Marvin Sextro, Gabriel Dernbach, Kai Standvoss, Simon Schallenberg, Frederick Klauschen, Klaus-Robert Müller, Maximilian Alber, Lukas Ruff
- Preprint: https://arxiv.org/abs/2411.07643v1
- Code: https://github.com/marvinsxtr/explainable-cell-graphs/
- Presented at: “Machine Learning for Health (ML4H)” symposium
- Presented by: Marvin Sextro
- xMIL: Insightful Explanations for Multiple Instance Learning in Histopathology
- Authors: Julius Hense, Mina Jamshidi Idaji, Oliver Eberle, Thomas Schnake, Jonas Dippel, Laure Ciernik, Oliver Buchstab, Andreas Mock, Frederick Klauschen, Klaus-Robert Müller
- Preprint: https://arxiv.org/abs/2406.04280
- Code: https://github.com/tubml-pathology/xMIL
- Presented by: Julius Hense, Mina Jamshidi Idaji
Further contributions linked to BIFOLD:
- CoSy: Evaluating Textual Explanations of Neurons
- Authors: Laura Kopf, Philine Lou Bommer, Anna Hedström, Sebastian Lapuschkin, Marina M.-C. Höhne, Kirill Bykov
- Preprint: https://arxiv.org/abs/2405.20331
- Code: https://github.com/lkopf/cosy
- Presented by: Laura Kopf
- Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond
- Authors: Dilyara Bareeva, Galip Ümit Yolcu, Anna Hedström, Niklas Schmolenski, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
- Preprint: https://arxiv.org/abs/2410.07158
- Presented at: ATTRIB Workshop
- Presented by: Galip Ümit Yolcu
- Breaking the curse of dimensionality in structured density estimation
- Authors: Robert A. Vandermeulen, Wai Ming Tai, Bryon Aragam
- Preprint: https://arxiv.org/abs/2410.07685
- Disclaimer: Much of this work was conducted at the Berlin Institute for the Foundations of Learning and Data (BIFOLD), Technische Universität Berlin.
- Explainable AI needs formal notions of explanation correctness
- Authors: Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
- Preprint: https://arxiv.org/abs/2409.14590
- Presented by: Stefan Haufe
- The effect of whitening on explanation performance
- Authors: Benedict Clark, Stoyan Karastoyanov, Rick Wilming, Stefan Haufe
- Presented by: Benedict Clark