Model Agnostic Interpretability for Multiple Instance Learning

Authors: Joseph Early, Christine Evers, Sarvapali Ramchurn

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply our model-agnostic methods to seven MIL datasets. In this section, we detail the evaluation strategy (Section 4.1), the datasets (Section 4.2), models (Section 4.3), and results (Section 4.4).
Researcher Affiliation | Academia | Joseph Early, Christine Evers & Sarvapali Ramchurn, Agents, Interaction and Complexity Group, Department of Electronics and Computer Science, University of Southampton, {J.A.Early, C.Evers, sdr1}@soton.ac.uk
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Source code for this project is available at https://github.com/JAEarly/MILLI.
Open Datasets | Yes (loading sketch below) | The annotated SIVAL dataset was downloaded from the publicly accessible page: http://pages.cs.wisc.edu/~bsettles/data/. The MNIST dataset was accessed directly from the PyTorch Python library: https://pytorch.org/vision/stable/datasets.html#mnist. The CRC dataset was downloaded from the publicly accessible page: https://warwick.ac.uk/fac/cross_fac/tia/data/crchistolabelednucleihe/. The Musk dataset was downloaded from the publicly accessible page: https://archive.ics.uci.edu/ml/datasets/Musk+%28Version+2%29. The Tiger, Elephant and Fox datasets were downloaded from the publicly accessible page: http://www.cs.columbia.edu/~andrews/mil/datasets.html
Dataset Splits | Yes (split sketch below) | The dataset was separated into train, validation, and test data using an 80/10/10 split. This was done with stratified sampling in order to maintain the same data distribution across all splits.
Hardware Specification | Yes | Some local experiments were carried out on a Dell XPS Windows laptop, utilising a GeForce GTX 1650 graphics card with 4GB of VRAM. GPU support for machine learning was enabled through CUDA v11.0. Other longer-running experiments, such as hyperparameter tuning, were carried out on a remote GPU node utilising a Volta V100 Enterprise Compute GPU with 16GB of VRAM.
Software Dependencies | Yes (tuning sketch below) | All code for this work was implemented in Python 3.8, using the PyTorch library for the machine learning functionality. Hyperparameter tuning was carried out using the Optuna library.
Experiment Setup | Yes (training sketch below) | We discuss our choice of hyperparameters in Appendix A.5. ... The training procedure was the same as for the SIVAL experiments: a batch size of one, early stopping with a patience of ten, and a maximum of 100 epochs. The training hyperparameter details are given in Table A12.
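
The Open Datasets row notes that MNIST was accessed directly through the PyTorch library. A minimal loading sketch via torchvision is given below; the normalisation constants and download directory are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: loading MNIST via torchvision, as referenced in the
# Open Datasets row. Normalisation constants and the data root are
# illustrative assumptions rather than settings reported by the authors.
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),  # commonly used MNIST statistics
])

train_set = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
test_set = datasets.MNIST(root="./data", train=False, download=True, transform=transform)

print(len(train_set), len(test_set))  # 60000 10000
```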
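
The Dataset Splits row describes an 80/10/10 train/validation/test split with stratified sampling. One way to produce such a split is sketched below, assuming scikit-learn's train_test_split; the paper does not name the splitting implementation.

```python
# Minimal sketch of an 80/10/10 stratified split, as described in the
# Dataset Splits row. scikit-learn is an assumption; only the ratios and
# the use of stratified sampling come from the paper.
from sklearn.model_selection import train_test_split

def stratified_80_10_10(bags, labels, seed=0):
    # First hold out 20% of the data, preserving class proportions.
    train_x, rest_x, train_y, rest_y = train_test_split(
        bags, labels, test_size=0.2, stratify=labels, random_state=seed)
    # Split the held-out 20% in half: 10% validation, 10% test.
    val_x, test_x, val_y, test_y = train_test_split(
        rest_x, rest_y, test_size=0.5, stratify=rest_y, random_state=seed)
    return (train_x, train_y), (val_x, val_y), (test_x, test_y)
```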
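
The Software Dependencies row states that hyperparameter tuning used Optuna. The sketch below shows the general Optuna study pattern; the search space and the train_and_validate placeholder are hypothetical, with the authors' actual hyperparameter choices documented in Appendix A.5 and Table A12 of the paper.

```python
# Minimal sketch of hyperparameter tuning with Optuna, as mentioned in the
# Software Dependencies row. The search ranges and the training function
# below are placeholders, not the paper's configuration.
import optuna

def train_and_validate(lr, weight_decay, dropout):
    # Placeholder standing in for: train the MIL model with these
    # hyperparameters and return its validation loss.
    return (lr - 1e-3) ** 2 + weight_decay + dropout * 0.01

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    weight_decay = trial.suggest_float("weight_decay", 1e-6, 1e-2, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    return train_and_validate(lr=lr, weight_decay=weight_decay, dropout=dropout)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```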
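
The Experiment Setup row quotes a training procedure with a batch size of one, early stopping with a patience of ten, and a maximum of 100 epochs. A minimal PyTorch sketch of such a loop follows; the Adam optimiser and the validation-loss stopping criterion are assumptions, since the quoted text does not specify them.

```python
# Minimal sketch of the quoted training procedure: batch size of one,
# early stopping with a patience of ten, maximum of 100 epochs.
# The optimiser and stopping metric are assumptions, not paper details.
import copy
import torch

def train(model, train_loader, val_loader, criterion, max_epochs=100, patience=10):
    optimiser = torch.optim.Adam(model.parameters())  # optimiser choice is an assumption
    best_val, best_state, epochs_since_improvement = float("inf"), None, 0
    for epoch in range(max_epochs):
        model.train()
        for bag, label in train_loader:  # batch size of one: each bag is its own batch
            optimiser.zero_grad()
            loss = criterion(model(bag), label)
            loss.backward()
            optimiser.step()
        # Early stopping on mean validation loss.
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(bag), label).item()
                           for bag, label in val_loader) / len(val_loader)
        if val_loss < best_val:
            best_val = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_since_improvement = 0
        else:
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                break
    model.load_state_dict(best_state)
    return model
```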