Rethinking Transformer for Long Contextual Histopathology Whole Slide Image Analysis
Authors: Honglin Li, Yunlong Zhang, Pingyi Chen, Zhongyi Shui, Chenglu Zhu, Lin Yang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method, Long-contextual MIL (Long MIL), is evaluated through extensive experiments on various WSI tasks to validate its superiority in: 1) overall performance, 2) memory usage and speed, and 3) extrapolation ability compared to previous methods. |
| Researcher Affiliation | Academia | Honglin Li (1,3), Yunlong Zhang (1,3), Pingyi Chen (1,3), Zhongyi Shui (1,3), Chenglu Zhu (2,3), Lin Yang (2,3); affiliations: 1 Zhejiang University, 2 Research Center for Industries of the Future, 3 School of Engineering, Westlake University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks clearly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Our code will be available at https://github.com/invoker-LL/Long-MIL. |
| Open Datasets | Yes | We use four datasets to evaluate our method for both tumor subtyping and survival prediction. For data details and pre-processing, please see Appendix A.4. BReAst Carcinoma Subtyping (BRACS) [4]... The Cancer Genome Atlas Breast Cancer (TCGA-BRCA) [68, 55]... TCGA-COADREAD... TCGA-STAD... |
| Dataset Splits | Yes | For TCGA-BRCA, we perform 10-fold cross-validation with the same data split adopted in HIPT [9]. The BRACS dataset is officially split into training, validation, and testing sets, so the experiment is conducted 5 times with different random seeds. (A sketch of both protocols follows this table.) |
| Hardware Specification | Yes | We train our model with PyTorch on an RTX-3090 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We train our model with PyTorch on an RTX-3090 GPU, with a WSI-level batch size of 1, learning rate of 1e-4, and weight decay of 1e-2. We add positional encoding into the framework; please check our code for details. (A minimal configuration sketch follows this table.) |
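The split protocol in the Dataset Splits row can be illustrated with a short sketch. This is a minimal illustration only: the shuffled `KFold` stands in for the actual HIPT fold assignments, and `slide_ids`, `run_fold`, and `run_split` are hypothetical placeholders, not code from the Long-MIL repository.

```python
import random
from sklearn.model_selection import KFold

# Hypothetical slide identifiers; the real TCGA-BRCA slide list and the
# exact HIPT fold assignments are not reproduced here.
slide_ids = [f"slide_{i:04d}" for i in range(100)]

# TCGA-BRCA: 10-fold cross-validation (the paper reuses the HIPT splits;
# a plain shuffled KFold merely stands in for them).
kfold = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(slide_ids)):
    train_ids = [slide_ids[i] for i in train_idx]
    test_ids = [slide_ids[i] for i in test_idx]
    # run_fold(fold, train_ids, test_ids)  # hypothetical per-fold routine

# BRACS: the official train/val/test split, run 5 times with different seeds.
for seed in range(5):
    random.seed(seed)
    # run_split(bracs_train, bracs_val, bracs_test)  # hypothetical routine
```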
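The hyperparameters in the Experiment Setup row translate directly into a training-loop configuration. The sketch below is assumption-laden: the paper does not name its optimizer (AdamW is assumed here as a common pairing with a 1e-2 weight decay), and the attention-pooling `MILHead` is a generic ABMIL-style stand-in, not the authors' Long-MIL architecture with positional encoding.

```python
import torch
import torch.nn as nn

class MILHead(nn.Module):
    """Generic attention-pooling MIL head; a stand-in, not Long-MIL itself."""
    def __init__(self, feat_dim=1024, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, 128), nn.Tanh(),
                                  nn.Linear(128, 1))
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                          # feats: (n_patches, d)
        w = torch.softmax(self.attn(feats), dim=0)     # attention over patches
        return self.fc((w * feats).sum(0, keepdim=True))  # (1, n_classes)

model = MILHead()
# Reported settings: lr 1e-4, weight decay 1e-2; the optimizer is an assumption.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
criterion = nn.CrossEntropyLoss()

# One toy "slide": a bag of patch features plus a label (WSI-level batch size 1).
feats = torch.randn(500, 1024)
label = torch.tensor([1])

loss = criterion(model(feats), label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

A WSI-level batch size of 1 is standard in MIL pipelines, since each slide yields a variable-length bag of patch features that cannot be trivially stacked into a batch.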