Interventional Multi-Instance Learning with Deconfounded Instance-Level Prediction

Authors: Tiancheng Lin, Hongteng Xu, Canqian Yang, Yi Xu

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on pathological image analysis demonstrate that our IMIL method substantially reduces false positives and outperforms state-of-the-art MIL methods.
Researcher Affiliation | Collaboration | (1) Shanghai Key Lab of Digital Media Processing and Transmission, Shanghai Jiao Tong University; (2) MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University; (3) Gaoling School of Artificial Intelligence, Renmin University of China; (4) Beijing Key Laboratory of Big Data Management and Analysis Methods; (5) JD Explore Academy
Pseudocode | No | The paper describes algorithmic steps in text but does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | No | The paper does not provide concrete access to source code (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the methodology it describes.
Open Datasets | Yes | The datasets used in our experiments are DigestPath (Li et al. 2019a, https://digestpath2019.grand-challenge.org/) and Camelyon16 (Bejnordi et al. 2017, https://camelyon16.grand-challenge.org/), both of which have bag-level and instance-level labels for each image and its patches, respectively.
Dataset Splits | Yes | We evaluate the instance-level performance of each method based on 5-fold cross-validation, and the measurements include Area Under Curve (AUC), accuracy (ACC), F1-score, recall (REC) and precision (PRE). (A minimal metric-computation sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions using 'ResNet-18' and 'Adam optimizer' but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | Adam optimizer is used with an initial learning rate of 0.001, and the batch size is set to 64. We run 50 epochs in total and decay the learning rate with the cosine decay schedule (Loshchilov and Hutter 2016). For our method, the hyperparameters are m = 0.5, τ = 0.05 and T = 0.95 by default. (A hedged training-configuration sketch follows the table.)
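
As a reading aid for the Dataset Splits row, the following is a minimal sketch of how the reported 5-fold cross-validation metrics (AUC, ACC, F1, REC, PRE) could be computed. The use of scikit-learn, the toy data, and all variable names are assumptions, not the authors' evaluation code.

```python
# Hypothetical sketch of the instance-level evaluation protocol: 5-fold
# cross-validation with AUC, ACC, F1, REC and PRE. Library choice and
# names are assumptions, not taken from the paper.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import (roc_auc_score, accuracy_score, f1_score,
                             recall_score, precision_score)

def evaluate_fold(y_true, y_score, threshold=0.5):
    """Compute the five reported instance-level metrics for one fold."""
    y_pred = (y_score >= threshold).astype(int)
    return {
        "AUC": roc_auc_score(y_true, y_score),
        "ACC": accuracy_score(y_true, y_pred),
        "F1":  f1_score(y_true, y_pred),
        "REC": recall_score(y_true, y_pred),
        "PRE": precision_score(y_true, y_pred),
    }

# Toy stand-in for instance labels and a model's predicted probabilities.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = rng.random(1000)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_metrics = [evaluate_fold(labels[test_idx], scores[test_idx])
                for _, test_idx in skf.split(np.zeros((len(labels), 1)), labels)]
print({k: float(np.mean([m[k] for m in fold_metrics])) for k in fold_metrics[0]})
```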
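
The Experiment Setup row reads as a concrete training configuration; below is a hedged PyTorch sketch of that configuration (Adam, initial learning rate 0.001, batch size 64, 50 epochs, cosine decay) using the ResNet-18 backbone mentioned under Software Dependencies. The dataset, loss, and variable names are placeholders, and the IMIL-specific hyperparameters m, τ and T are only recorded as constants here, not implemented.

```python
# Hedged sketch of the reported training configuration, not the authors' code:
# Adam, initial LR 0.001, batch size 64, 50 epochs, cosine learning-rate decay
# (Loshchilov and Hutter 2016). Dataset and loss are toy placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

M_MOMENTUM, TAU, T_THRESHOLD = 0.5, 0.05, 0.95   # paper defaults for m, τ, T

model = resnet18(num_classes=2)                   # instance-level patch classifier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
criterion = nn.CrossEntropyLoss()

# Toy stand-in for the patch-level training data (real inputs would be image patches).
train_set = TensorDataset(torch.randn(128, 3, 64, 64), torch.randint(0, 2, (128,)))
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

for epoch in range(50):
    for patches, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(patches), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()                              # cosine decay stepped once per epoch
```

Stepping the scheduler once per epoch with T_max=50 makes the learning rate complete a single cosine decay over the 50 reported epochs.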