Cell Graph Transformer for Nuclei Classification
Authors: Wei Lou, Guanbin Li, Xiang Wan, Haofeng Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results suggest that the proposed cell graph transformer with topology-aware pretraining significantly improves the nuclei classification results and achieves state-of-the-art performance. Code and models are available at https://github.com/lhaof/CGT |
| Researcher Affiliation | Academia | 1. Shenzhen Research Institute of Big Data, Shenzhen, China; 2. The Chinese University of Hong Kong, Shenzhen, China; 3. School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; 4. Guangdong Province Key Laboratory of Information Security Technology |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Code and models are available at https://github.com/lhaof/CGT |
| Open Datasets | Yes | We utilize four nuclei classification datasets: PanNuke (Gamper et al. 2020), Lizard (Graham et al. 2021), NuCLS (Amgad et al. 2022), and BRCA-M2C (Abousamra et al. 2021). |
| Dataset Splits | Yes | The data split and more details are in the supplementary material. |
| Hardware Specification | Yes | The pretraining strategy and the training of CGT are run for 150 and 50 epochs, respectively, with the Adam optimizer on an NVIDIA A100 GPU. |
| Software Dependencies | No | The implementation is based on PyTorch (Paszke et al. 2017) and the PyTorch Geometric library (Fey and Lenssen 2019). Specific version numbers for these libraries are not provided. |
| Experiment Setup | Yes | For the proposed CGT, the encoder and decoder of the feature extractor have four layers and three layers, respectively. The CGT encoder contains four transformer layers. For the pretraining strategy, the GCN is built of two GENConv (Li et al. 2020) layers. Results are reported as the average over training with three different random seeds. The dimensions of type markers and link markers are 64 and 16. Each node has 4 edges. The pretraining strategy and the training of CGT are run for 150 and 50 epochs, respectively, with the Adam optimizer on an NVIDIA A100 GPU. The initial learning rates for pretraining and training are 1e-4 and 1e-5, respectively. |
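
The reported setup maps almost directly onto PyTorch Geometric. Below is a minimal sketch, not the authors' implementation: the two-layer GENConv GCN used in the pretraining strategy, the 4-nearest-neighbour cell graph, and the Adam learning rates come from the Experiment Setup row, while the feature dimension, number of nuclei classes, and the use of `knn_graph` for graph construction are assumptions made for illustration. The reference code is at https://github.com/lhaof/CGT.

```python
# Sketch of the pretraining GCN and cell graph described above (assumed dimensions).
import torch
from torch_geometric.nn import GENConv, knn_graph  # knn_graph requires torch-cluster

class PretrainGCN(torch.nn.Module):
    """Two GENConv layers, as stated for the topology-aware pretraining strategy."""
    def __init__(self, in_dim=512, hidden_dim=256, num_classes=5):  # dims are assumptions
        super().__init__()
        self.conv1 = GENConv(in_dim, hidden_dim)
        self.conv2 = GENConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = torch.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

# Cell graph: each nucleus (node) is linked to its 4 nearest neighbours,
# matching "each node has 4 edges" in the setup.
centroids = torch.rand(100, 2)      # dummy nucleus centroids
features = torch.rand(100, 512)     # dummy nucleus features
edge_index = knn_graph(centroids, k=4)

model = PretrainGCN()
# Adam with the reported initial learning rate for pretraining (1e-4); CGT training uses 1e-5.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
logits = model(features, edge_index)
```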