SagaNet: A Small Sample Gated Network for Pediatric Cancer Diagnosis
Authors: Yuhan Liu, Shiliang Sun
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate this framework on a challenging small sample SRBCTs dataset, whose classification is difficult even for professional pathologists. The proposed model shows the best performance compared with state-of-the-art deep models and generalization on another pathological dataset, which illustrates the potentiality of deep learning applications in difficult small sample medical tasks. |
| Researcher Affiliation | Academia | 1School of Computer Science and Technology, East China Normal University, Shanghai, China. Correspondence to: Shiliang Sun <slsun@cs.ecnu.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 Mask Construction |
| Open Source Code | No | The paper does not provide any links or explicit statements about making its source code publicly available. |
| Open Datasets | Yes | We train and evaluate our model on a small sample SRBCTs dataset... Furthermore, to show the generalization of the model, the comparison results on another dataset BreaKHis (Spanhol et al., 2015) are also provided. |
| Dataset Splits | Yes | Thus, we split the dataset with the constraint that the patients of the training set, the validation set, and the test set cannot overlap each other and the number of patches in the last two parts is within the range [45, 55]. Finally, we randomly take 20 splits. |
| Hardware Specification | Yes | One GPU (NVIDIA GeForce RTX 2080 Ti) is used. |
| Software Dependencies | No | The paper mentions software components like "SGD optimizer", "Adam optimizer", "SENet", "DenseNet", "ELU", but it does not specify exact version numbers for any libraries, frameworks, or development environments used. |
| Experiment Setup | Yes | When constructing masks for input images, we use the SGD optimizer with the learning rate of 0.05 when training the SENet network and set the number of minimum segmented areas as 7. In the process of merging segmented areas, we set T_low = 78, T_high = 158, T_red = 37 and T_smooth = 35. ...we configure the initial learning rate to 5e-6 with the Adam optimizer and multiply it by 0.7 at the end of each epoch. The whole number of training epochs is set to 300, and an early stopping mechanism is applied with the patience of 20 epochs. |
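The split procedure quoted above (patients must not overlap across train/validation/test, and the two held-out parts must contain a number of patches in [45, 55], repeated for 20 random splits) can be sketched as follows. This is not the authors' code; the function name, the greedy patient assignment, and the reading that *each* held-out part must land in [45, 55] are our assumptions.

```python
import random
from collections import defaultdict

def patient_level_splits(patch_patients, n_splits=20, lo=45, hi=55,
                         seed=0, max_tries=10000):
    """patch_patients[i] is the patient ID of patch i.

    Returns n_splits tuples (train, val, test) of patch indices such
    that no patient appears in more than one subset and both val and
    test hold between lo and hi patches (assumed interpretation).
    """
    # Group patch indices by patient so a patient never straddles subsets.
    by_patient = defaultdict(list)
    for idx, pid in enumerate(patch_patients):
        by_patient[pid].append(idx)
    patients = list(by_patient)
    rng = random.Random(seed)

    splits, tries = [], 0
    while len(splits) < n_splits:
        tries += 1
        if tries > max_tries:
            raise RuntimeError("could not satisfy the [lo, hi] constraint")
        rng.shuffle(patients)
        val, test, train = [], [], []
        # Greedily fill val, then test, then put the rest in train.
        for pid in patients:
            if len(val) < lo:
                val.extend(by_patient[pid])
            elif len(test) < lo:
                test.extend(by_patient[pid])
            else:
                train.extend(by_patient[pid])
        # Accept the split only if both held-out parts are in range.
        if lo <= len(val) <= hi and lo <= len(test) <= hi:
            splits.append((train, val, test))
    return splits
```

Rejected candidates are simply reshuffled and retried, which mirrors "we randomly take 20 splits" without assuming anything about the patch-per-patient distribution.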
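The training schedule in the Experiment Setup row (Adam at an initial learning rate of 5e-6, multiplied by 0.7 at the end of each epoch, at most 300 epochs, early stopping with patience 20) is a per-epoch exponential decay plus a patience counter. A minimal framework-free sketch, with function names of our choosing:

```python
def lr_at_epoch(epoch, base_lr=5e-6, gamma=0.7):
    # The learning rate is multiplied by gamma at the end of every
    # epoch, so epoch e trains at base_lr * gamma**e.
    return base_lr * gamma ** epoch

def early_stopping_epoch(val_losses, max_epochs=300, patience=20):
    """Given one validation loss per epoch, return (best_epoch,
    last_epoch): the epoch of the lowest loss seen and the epoch at
    which training stops, either early or at the epoch budget."""
    best_loss = float("inf")
    best_epoch = -1
    last_epoch = min(len(val_losses), max_epochs) - 1
    for epoch, loss in enumerate(val_losses[:max_epochs]):
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            # No improvement for `patience` consecutive epochs: stop.
            return best_epoch, epoch
    return best_epoch, last_epoch
```

With gamma = 0.7 the learning rate falls below 1% of its initial value after about 13 epochs, so in practice the early-stopping criterion, not the 300-epoch budget, is likely to end training.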