Additive MIL: Intrinsically Interpretable Multiple Instance Learning for Pathology
Authors: Syed Ashar Javed, Dinkar Juyal, Harshith Padigela, Amaro Taylor-Weiner, Limin Yu, Aaditya Prakash
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform various experiments to show the benefits of using Additive MIL models for interpretability in pathology problems. (See the Additive MIL sketch below.) |
| Researcher Affiliation | Industry | Syed Ashar Javed (PathAI Inc., ashar.javed@pathai.com); Dinkar Juyal (PathAI Inc., dinkar.juyal@pathai.com); Harshith Padigela (PathAI Inc., harshith.padigela@pathai.com); Amaro Taylor-Weiner (PathAI Inc., amaro.taylor@pathai.com); Limin Yu (PathAI Inc., limin.yu@pathai.com); Aaditya Prakash (PathAI Inc., adi.prakash.ml@gmail.com) |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] |
| Open Datasets | Yes | The first problem is the prediction of cancer subtypes in non-small cell lung carcinoma (NSCLC) and renal cell carcinoma (RCC), both of which use the TCGA dataset [45]. The second problem is the detection of metastasis in breast cancer using the Camelyon16 dataset [6]. |
| Dataset Splits | Yes | Both TCGA datasets were split into 60/15/25 (train/val/test) as done previously [36] while ensuring no data leakage at a case level. (See the case-level split sketch below.) |
| Hardware Specification | Yes | All training and inference runs were done on Quadro RTX 8000, and it takes 3 to 4 hours to train the model with four GPUs. |
| Software Dependencies | No | The paper mentions 'ADAM optimizer' and 'Shufflenet' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | For training the models, a bag size of 48-1600 patches and batch size of 16-64 was experimented with and the best one chosen using cross-validation. [...] the entire model was trained with ADAM optimizer [21] and a learning rate of 1e-4. (See the training-setup sketch below.) |
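For context on the method the title refers to: in standard attention-based MIL, patch features are attention-pooled first and classified second, so patch importance must be inferred indirectly from attention weights. Additive MIL reorders this so per-patch class contributions are computed first and then summed, making each patch's share of the slide-level logit exact by construction. Below is a minimal PyTorch sketch of that idea; the dimensions, module names, and attention scorer are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class AdditiveMIL(nn.Module):
    """Sketch of an additive MIL head: per-patch class contributions
    are computed before pooling, so they sum exactly to the bag logits.
    feat_dim assumes pre-extracted patch embeddings (the paper uses a
    ShuffleNet backbone, omitted here); all sizes are illustrative."""

    def __init__(self, feat_dim=1024, attn_dim=256, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(              # simple attention scorer
            nn.Linear(feat_dim, attn_dim), nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                          # bag: (n_patches, feat_dim)
        attn = torch.softmax(self.attention(bag), dim=0)  # (n_patches, 1)
        contributions = self.classifier(attn * bag)       # (n_patches, n_classes)
        bag_logits = contributions.sum(dim=0)             # (n_classes,)
        return bag_logits, contributions
```

Because the classifier is applied before the sum, `contributions[i]` can be read directly as patch i's signed contribution to each class logit, which is the interpretability property the Research Type row alludes to.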
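The Dataset Splits row reports a 60/15/25 train/val/test split with no leakage at the case level, i.e., all slides from one case land in the same partition. The paper's split code is not reproduced in this report; the following is a minimal sketch of one way to realize such a grouped split with scikit-learn's GroupShuffleSplit, where `slide_ids` and `case_ids` are hypothetical identifiers.

```python
from sklearn.model_selection import GroupShuffleSplit

def case_level_split(slide_ids, case_ids, seed=0):
    """60/15/25 train/val/test split that keeps all slides from the
    same case in the same partition (no case-level leakage)."""
    # First peel off 25% of cases as the held-out test set.
    outer = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=seed)
    trainval_idx, test_idx = next(outer.split(slide_ids, groups=case_ids))
    # Split the remaining 75% so that 15% of the total is validation
    # (0.15 / 0.75 = 0.2 of the remainder).
    inner = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    sub_groups = [case_ids[i] for i in trainval_idx]
    train_sub, val_sub = next(inner.split(trainval_idx, groups=sub_groups))
    train_idx = [trainval_idx[i] for i in train_sub]
    val_idx = [trainval_idx[i] for i in val_sub]
    return train_idx, val_idx, test_idx
```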
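Likewise, the Experiment Setup row pins down Adam with a learning rate of 1e-4 and a sweep over bag size (48-1600 patches) and batch size (16-64) selected by cross-validation. The loop below sketches one point in that grid, reusing the AdditiveMIL head from the first sketch; the specific bag and batch sizes are illustrative choices, not the paper's selected values.

```python
import torch
from torch.optim import Adam

BAG_SIZE = 512        # patches per bag; paper sweeps 48-1600 (illustrative)
BATCH_SIZE = 32       # bags per step; paper sweeps 16-64 (illustrative)
LEARNING_RATE = 1e-4  # fixed, as quoted in the Experiment Setup row

model = AdditiveMIL(feat_dim=1024, n_classes=2)  # head sketched above
optimizer = Adam(model.parameters(), lr=LEARNING_RATE)
criterion = torch.nn.CrossEntropyLoss()

def train_step(bags, labels):
    """One optimizer step over a batch of bags.

    bags: tensor of shape (BATCH_SIZE, BAG_SIZE, feat_dim)
    labels: long tensor of shape (BATCH_SIZE,)
    """
    optimizer.zero_grad()
    logits = torch.stack([model(bag)[0] for bag in bags])  # (BATCH_SIZE, n_classes)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```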