UniCell: Universal Cell Nucleus Classification via Prompt Learning

Authors: Junjia Huang, Haofeng Li, Xiang Wan, Guanbin Li

AAAI 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results demonstrate that the proposed method effectively achieves the state-of-the-art results on four nucleus detection and classification benchmarks.
Researcher Affiliation Academia 1School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China 2Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen, China 3Guangdong Province Key Laboratory of Information Security Technology huangjj77@mail2.sysu.edu.cn, {lhaof,wanxiang}@sribd.cn, liguanbin@mail.sysu.edu.cn
Pseudocode No The paper describes the proposed method in prose and via architectural diagrams (e.g., Figure 2 and Figure 3), but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Code and models are available at https://github.com/lhaof/UniCell
Open Datasets Yes We conduct experiments on four datasets: CoNSeP (Graham et al. 2019) ... MoNuSAC (Verma et al. 2021) ... Lizard (Graham et al. 2021) ... OCELOT (Ryu et al. 2023)
Dataset Splits Yes For OCELOT, we only utilize its nucleus annotations and split the dataset into training and testing sets with a ratio of 7:3. For the other datasets, we use their default/official data splits.
Hardware Specification Yes All models are trained and tested with an NVIDIA A100 (80GB) GPU.
Software Dependencies No The paper mentions using a "DETR-like structure", an "AdamW optimizer" and a "Swin-B... backbone" but does not specify version numbers for general software libraries or frameworks (e.g., PyTorch, TensorFlow, CUDA).
Experiment Setup Yes AdamW optimizer is used to train UniCell with initial learning rates of 1e-4 and 1e-5 for the backbone and other modules, respectively. For data augmentation, we apply random flip, random crop and multi-scale training with sizes between 600 and 800, and infer images after resizing to 800×800. All models are trained and tested with an NVIDIA A100 (80GB) GPU. The number of training iterations is set to 160k, and the final model after 160k iterations is chosen for evaluation. We use the SAHI (Akyon, Altinuc, and Temizel 2022) scheme to slice the images into fixed-size patches as training samples and adopt sliding-window prediction during inference.
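The 7:3 OCELOT split noted above could be reproduced with a deterministic seeded shuffle. A minimal sketch; the seed value and the use of integer image IDs are assumptions, not details from the paper:

```python
import random

def split_dataset(image_ids, train_ratio=0.7, seed=0):
    """Shuffle the IDs with a fixed seed, then cut at the given ratio."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # deterministic for a fixed seed
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]

train_ids, test_ids = split_dataset(range(100))
print(len(train_ids), len(test_ids))  # 70 30
```

Fixing the seed makes the split reproducible across runs, which is the property a reproducibility audit would look for.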
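The SAHI-style slicing and sliding-window inference described in the setup amount to tiling each image into fixed-size, overlapping patches. A coordinate-only sketch: the 800-pixel patch size mirrors the paper's inference resolution, but the 20% overlap and the border-clamping behavior are assumptions, not confirmed details of UniCell's pipeline:

```python
def slice_coords(width, height, patch=800, overlap=0.2):
    """Return (x0, y0, x1, y1) boxes tiling a width x height image.

    Tiles advance by patch * (1 - overlap); a final tile is snapped to
    the right/bottom border so the whole image is covered.
    """
    step = int(patch * (1 - overlap))
    xs = list(range(0, max(width - patch, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - patch, 0) + 1, step)) or [0]
    if xs[-1] + patch < width:
        xs.append(width - patch)  # snap last column to the right edge
    if ys[-1] + patch < height:
        ys.append(height - patch)  # snap last row to the bottom edge
    return [
        (x, y, min(x + patch, width), min(y + patch, height))
        for y in ys
        for x in xs
    ]
```

For a 2000×1000 image this yields a 3×2 grid of 800×800 windows; an image smaller than the patch size produces a single box covering the whole image. During inference, per-window predictions would then be mapped back to image coordinates and merged.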