DPA-P2PNet: Deformable Proposal-Aware P2PNet for Accurate Point-Based Cell Detection
Authors: Zhongyi Shui, Sunyi Zheng, Chenglu Zhu, Shichuan Zhang, Xiaoxuan Yu, Honglin Li, Jingxiong Li, Pingyi Chen, Lin Yang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three benchmarks and a large-scale, real-world internal dataset demonstrate the superiority of our proposed models over the state-of-the-art counterparts. |
| Researcher Affiliation | Academia | 1. College of Computer Science and Technology, Zhejiang University; 2. School of Engineering, Westlake University |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Codes and pre-trained weights are available at https://github.com/windygoo/DPA-P2PNet. |
| Open Datasets | Yes | In this study, we evaluate the advantages of DPA-P2PNet over the state-of-the-art counterparts on three histopathology datasets with varied staining types, including the H&E-stained CoNSeP (Graham et al. 2019) and IHC Ki-67-stained BCData (Huang et al. 2020) datasets, and an internal IHC PD-L1 dataset. To validate the efficacy of our proposed mFoV DPA-P2PNet, we conduct comprehensive experiments on the OCELOT (Ryu et al. 2023) dataset... |
| Dataset Splits | Yes | We divide the PD-L1 and OCELOT datasets into training, validation, and test subsets at a ratio of 6:2:2. To avoid information leakage among the subsets, we randomly split the dataset per WSI, ensuring that different patches from the same WSI are not included in multiple subsets. |
| Hardware Specification | Yes | All models are trained on NVIDIA A100 GPUs. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python version, library versions) were explicitly stated. |
| Experiment Setup | Yes | The interval of pre-defined point proposals is set to 8 pixels on the CoNSeP dataset and 16 pixels on the other three datasets. By default, we use ResNet50 (He et al. 2016) and FPN (Lin et al. 2017) as the trunk and neck networks, respectively. All MLPs are structured as FC-ReLU-Dropout-FC. With the same label assignment scheme and loss functions as P2PNet (Song et al. 2021), we adopt the AdamW optimizer with weight decay 1e-4 to optimize our proposed models. During the training stage, data augmentations including random scaling, shifting, and flipping are applied on the fly. |
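The per-WSI split quoted above (6:2:2, with no WSI shared across subsets) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, signature, and use of a seeded shuffle are assumptions.

```python
import random

def split_per_wsi(patches, ratios=(0.6, 0.2, 0.2), seed=0):
    """Split patches into train/val/test at the WSI level, so that all
    patches cut from one slide land in exactly one subset.

    `patches` maps a patch id to the id of the WSI it was extracted from.
    Ratios follow the 6:2:2 split described in the paper; everything else
    here is an assumption for illustration.
    """
    wsis = sorted(set(patches.values()))
    random.Random(seed).shuffle(wsis)  # deterministic shuffle of slides
    n = len(wsis)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    groups = {
        "train": set(wsis[:n_train]),
        "val": set(wsis[n_train:n_train + n_val]),
        "test": set(wsis[n_train + n_val:]),
    }
    # Assign each patch to the subset that owns its parent WSI.
    return {name: [p for p, w in patches.items() if w in wsi_set]
            for name, wsi_set in groups.items()}
```

Splitting at the slide level rather than the patch level is what prevents near-duplicate tissue regions from leaking between training and evaluation.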
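The experiment-setup row mentions pre-defined point proposals spaced 8 pixels apart on CoNSeP and 16 pixels apart on the other datasets. A minimal sketch of such a proposal grid is below; the half-interval centering offset and the function name are assumptions, as the paper does not specify them.

```python
def point_proposals(height, width, interval):
    """Generate a dense grid of point proposals spaced `interval` pixels
    apart over an image of size (height, width).

    The paper uses interval=8 on CoNSeP and interval=16 on the other three
    datasets; placing each point at the center of its cell (half-interval
    offset) is an assumption made here for illustration.
    """
    offset = interval / 2
    return [(offset + x * interval, offset + y * interval)
            for y in range(height // interval)
            for x in range(width // interval)]
```

For a 64x64 patch with a 16-pixel interval this yields a 4x4 grid of 16 proposals, which the detection head then refines into cell-center predictions.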