Memory-Efficient Prompt Tuning for Incremental Histopathology Classification

Authors: Yu Zhu, Kang Li, Lequan Yu, Pheng Ann Heng

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have extensively evaluated our framework with two histopathology tasks, i.e., breast cancer metastasis classification and epithelium-stroma tissue classification, where our approach yielded superior performance and memory efficiency over the competing methods.
Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, The Chinese University of Hong Kong; (2) Department of Statistics and Actuarial Science, The University of Hong Kong; (3) Department of Mechanical Engineering, The University of Hong Kong
Pseudocode | Yes | Algorithm 1: Training Procedures
Open Source Code | No | The paper does not provide any links to source code repositories or explicitly state that the code for their methodology is open-source or available.
Open Datasets | Yes | We adopted the Camelyon17 dataset (Bandi et al. 2018) which provided the labels of the presence or absence of breast cancer. [...] We utilized four public datasets, including 615 images from VGH (Beck et al. 2011) (Domain 1), 671 images from NKI (Beck et al. 2011) (Domain 2), 1296 patches from IHC (Linder et al. 2012) (Domain 3), and 26,437 patches from NCH (Kather et al. 2019) (Domain 4).
Dataset Splits | Yes | We set the total time step as 4, where Domain 4 currently arrives, Domains 1-3 are previously delivered, and Domain 5 remains unseen to the model. [...] Here, we set the total time step as 3, where Domain 3 has currently arrived, Domains 1 and 2 are previously delivered, and Domain 4 remains unseen during model training. (See the split-schedule sketch after the table.)
Hardware Specification | No | The paper does not specify any particular hardware components such as GPU or CPU models, memory details, or cloud computing instances used for running the experiments.
Software Dependencies | No | The paper mentions using "ViT-B/16" as a feature extractor and the "Adam optimizer", but does not provide specific version numbers for software dependencies like Python, PyTorch, or CUDA.
Experiment Setup | Yes | We adopted the ViT-B/16 (Dosovitskiy et al. 2020) as our feature extractor f_b. We employ the Adam optimizer with a learning rate of 7.5e-4 in the first time step and a learning rate of 1e-4 for the subsequent time steps. Each experiment is repeated 5 times to avoid random bias.
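
To make the Dataset Splits row concrete, the sketch below encodes the incremental-domain schedule it describes: domains arrive one per time step, earlier domains count as previously delivered, and the final domain of each task is held out as unseen. This is a minimal Python illustration of the reported split only; the task keys, field names, and helper function are invented for this sketch and are not taken from the paper or its code.

```python
# Hypothetical encoding of the incremental-domain schedule quoted in the
# "Dataset Splits" row. Domain indices, dataset names, and sizes follow the
# reported text; all identifiers here are illustrative.

TASKS = {
    # Breast cancer metastasis classification (Camelyon17):
    # 4 time steps, Domains 1-4 arrive one per step, Domain 5 stays unseen.
    "camelyon17": {"arriving_domains": [1, 2, 3, 4], "unseen_domain": 5},
    # Epithelium-stroma classification (VGH 615, NKI 671, IHC 1296, NCH 26,437):
    # 3 time steps, Domains 1-3 arrive one per step, Domain 4 stays unseen.
    "epithelium_stroma": {"arriving_domains": [1, 2, 3], "unseen_domain": 4},
}


def schedule(task: str):
    """Yield (time_step, current_domain, previously_delivered_domains)."""
    cfg = TASKS[task]
    for t, current in enumerate(cfg["arriving_domains"], start=1):
        previously_delivered = cfg["arriving_domains"][: t - 1]
        yield t, current, previously_delivered


if __name__ == "__main__":
    for t, current, seen in schedule("camelyon17"):
        print(f"step {t}: Domain {current} arrives; previously delivered: {seen}")
```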
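
The Experiment Setup row can likewise be read as a small training configuration. The sketch below assumes PyTorch and torchvision; only the reported hyperparameters (ViT-B/16 backbone, Adam, a 7.5e-4 learning rate at the first time step and 1e-4 afterwards) come from the paper's text, while the frozen backbone, the prompt tensor shape, and all names are assumptions made for illustration, since the authors did not release code.

```python
# A minimal sketch of the reported training configuration, assuming PyTorch.
import torch
from torchvision.models import vit_b_16


def build_optimizer(trainable_params, time_step: int) -> torch.optim.Adam:
    # Reported learning rates: 7.5e-4 at the first time step, 1e-4 afterwards.
    lr = 7.5e-4 if time_step == 1 else 1e-4
    return torch.optim.Adam(trainable_params, lr=lr)


backbone = vit_b_16()  # ViT-B/16 feature extractor f_b
for p in backbone.parameters():
    # Assumption: prompt tuning keeps the backbone frozen and trains only
    # the lightweight prompt / classifier parameters.
    p.requires_grad_(False)

# Illustrative trainable parameters standing in for the learnable prompts;
# their actual number and shape are defined by the paper's method.
prompts = torch.nn.Parameter(torch.zeros(10, 768))
optimizer = build_optimizer([prompts], time_step=1)
```

Since the paper reports that each experiment is repeated 5 times to avoid random bias, a reproduction would wrap this setup in a loop over five random seeds and average the results.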