Position: Explain to Question not to Justify

Authors: Przemysław Biecek, Wojciech Samek

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This position paper argues that the area of RED XAI is currently under-explored, i.e., that more explainability methods are desperately needed to question models (e.g., extracting knowledge from well-performing models as well as spotting and fixing bugs in faulty models), and that RED XAI holds great opportunities and potential for important research necessary to ensure the safety of AI systems.
Researcher Affiliation | Academia | MI2.AI, University of Warsaw, Poland; MI2.AI, Warsaw University of Technology, Poland; Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Germany; Department of Electrical Engineering and Computer Science, Technical University of Berlin, Germany; BIFOLD (Berlin Institute for the Foundations of Learning and Data), Germany.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | As a position paper, it proposes no new methodology or system for which source code would be released. It references existing open-source libraries (e.g., sklearn, iNNvestigate, DALEX, Quantus) as examples of tools in the field, but offers no code of its own.
Open Datasets | No | As a position paper, it describes no experiments conducted by the authors. It refers to datasets only in general terms (e.g., 'medical image classification tasks' and 'Covid-19 data' for illustrative figures adapted from other works) and mentions benchmarks such as the FICO challenge, but since it conducts no empirical studies of its own, it provides no data access information.
Dataset Splits | No | The paper describes no experiments or data analysis conducted by the authors; therefore, no dataset split information is provided.
Hardware Specification | No | The paper describes no experiments conducted by the authors; therefore, no hardware specifications are provided.
Software Dependencies | No | The paper describes no experiments conducted by the authors. While it mentions existing software tools and libraries (e.g., 'sklearn', 'iNNvestigate', 'DALEX', 'Quantus') as examples relevant to the field, it does not list them as dependencies of its own work or provide version numbers.
Experiment Setup | No | The paper describes no experiments conducted by the authors; therefore, no experimental setup details such as hyperparameters or training configurations are provided.