Transformer-Based Video-Structure Multi-Instance Learning for Whole Slide Image Classification
Authors: Yingfan Ma, Xiaoyuan Luo, Kexue Fu, Manning Wang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three public WSI datasets show that our proposed method outperforms existing SOTA methods in both WSI classification and positive region detection. |
| Researcher Affiliation | Academia | Yingfan Ma (1,2), Xiaoyuan Luo (1,2), Kexue Fu (3), Manning Wang (1,2)*; (1) Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; (2) Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai 200032, China; (3) Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Jinan, China; 22211010089@m.fudan.edu.cn, {19111010030, mnwang}@fudan.edu.cn, fukx@sdas.org |
| Pseudocode | No | The paper describes its methods in prose and through figures, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statement about releasing its source code, nor does it provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | Extensive experiments on three public WSI datasets: CAMELYON16, PANDA, and TCGA-NSCLC. ... CAMELYON16 (Bejnordi et al. 2017), PANDA (Bulten et al. 2022). ... TCGA-NSCLC (http://www.cancer.gov/tcga) |
| Dataset Splits | No | The paper mentions a 'training set' and a 'test set' and reports 'ablation studies', but it does not specify percentages, sample counts, or the methodology used to split the data into training, validation, and test sets for the main experiments. |
| Hardware Specification | Yes | All experiments are conducted using 2 A100s. |
| Software Dependencies | No | The paper mentions using an 'Adam optimizer' and 'ResNet18' as an encoder but does not provide specific version numbers for programming languages or software libraries such as PyTorch or TensorFlow. |
| Experiment Setup | Yes | During the training process, we utilized the cross-entropy loss, with an Adam optimizer having a learning rate of 1e-4 and weight decay of 1e-4. |
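The reported training configuration (cross-entropy loss, Adam with learning rate 1e-4 and weight decay 1e-4) can be sketched as follows. This is a minimal illustration assuming PyTorch; the classifier here is a placeholder, not the paper's transformer-based MIL aggregator.

```python
import torch
from torch import nn

# Placeholder slide-level classifier; the paper's actual model is a
# transformer-based multi-instance learning network over patch features.
model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))

# Settings quoted from the paper's experiment setup.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)

# One illustrative training step on dummy bag-level features and labels.
features = torch.randn(8, 512)           # e.g. pooled patch embeddings
labels = torch.randint(0, 2, (8,))       # binary slide labels
optimizer.zero_grad()
loss = criterion(model(features), labels)
loss.backward()
optimizer.step()
```

The 512-dimensional feature size matches a ResNet18 encoder's final embedding, but the input dimensionality and classifier head are assumptions for illustration only.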