Bipartite Matching for Crowd Counting with Point Supervision

Authors: Hao Liu, Qiang Zhao, Yike Ma, Feng Dai

IJCAI 2021

Reproducibility assessment: each variable below lists the result and the LLM response supporting it.
Research Type: Experimental. "Extensive experiments on four datasets show that our method achieves state-of-the-art performance and performs better crowd localization. In this section, we first describe the details of experiment settings. Then we compare our proposed method with recent state-of-the-art methods on four public challenging datasets. Finally, ablation studies are further conducted to demonstrate the effectiveness of each component of our method."
Researcher Affiliation: Collaboration. "(1) Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; (2) University of Chinese Academy of Sciences, Beijing, China; (3) Artificial Intelligence on Electric Power System Joint Laboratory of SGCC, Global Energy Interconnection Research Institute Co., Ltd., Beijing, China"
Pseudocode: No. The paper describes the steps of its method and mentions algorithms such as the Hungarian algorithm, but it does not include a formally structured pseudocode block or algorithm listing.
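Since the paper gives no pseudocode, a minimal sketch of the core idea it names may help: pairing predicted points with ground-truth points by minimum-cost bipartite matching. The function name, the toy coordinates, and the equal-set-size assumption below are ours, not the paper's; this brute-forces the optimal assignment on a tiny example, whereas a real implementation would use the Hungarian algorithm (e.g., `scipy.optimize.linear_sum_assignment`).

```python
from itertools import permutations
import math

def min_cost_matching(pred_points, gt_points):
    """Brute-force minimum-cost bipartite matching between predicted
    and ground-truth points (illustrative stand-in for the Hungarian
    algorithm; only feasible for a handful of points)."""
    n = len(gt_points)
    assert len(pred_points) == n, "sketch assumes equal-sized point sets"
    best_cost, best_assign = math.inf, None
    # Try every assignment of predictions to ground-truth points.
    for perm in permutations(range(n)):
        cost = sum(
            math.dist(pred_points[i], gt_points[j])
            for i, j in enumerate(perm)
        )
        if cost < best_cost:
            best_cost, best_assign = cost, perm
    return best_cost, best_assign

# Toy example: two predicted head points vs. two annotated points.
pred = [(0.0, 0.0), (5.0, 5.0)]
gt = [(5.0, 4.0), (1.0, 0.0)]
cost, assign = min_cost_matching(pred, gt)
print(assign)  # → (1, 0): pred[i] is matched to gt[assign[i]]
```

Brute force is O(n!) and only serves to make the matching objective concrete; the Hungarian algorithm solves the same problem in polynomial time.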
Open Source Code: No. The paper provides no link or explicit statement that the source code for the described method (BM-Count) is publicly available.
Open Datasets: Yes. "We evaluate our method on four crowd counting datasets: UCF-QNRF [Idrees et al., 2018], NWPU [Wang et al., 2020b], Shanghai Tech [Zhang et al., 2016] and JHU-CROWD++ [Sindagi et al., 2020]."
Dataset Splits: No. The paper notes that "random crops are taken for training and crop sizes are based on the datasets" but does not explicitly provide a training/validation/test breakdown (e.g., percentages or counts), nor does it cite predefined splits.
Hardware Specification: Yes. "And the network is trained with batch size of 10 following DM-Count on an NVIDIA 2080Ti GPU."
Software Dependencies: No. The paper states that the "Adam optimizer is applied" but gives no version numbers for software dependencies such as Python, PyTorch, TensorFlow, or CUDA.
Experiment Setup: Yes. "Adam optimizer is applied with fixed learning rate at 1e-5 and weight decay of 1e-4. And the network is trained with batch size of 10 following DM-Count on an NVIDIA 2080Ti GPU. ... random crops are taken for training and crop sizes are based on the datasets. Specifically, 256 for Shanghai Tech Part A, 512 for Shanghai Tech Part B and UCF-QNRF, 384 for NWPU and JHU-CROWD++."
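The reported hyperparameters above can be collected into a small config sketch. The dictionary keys and the `crop_size` helper are hypothetical names of ours; only the values (optimizer, learning rate, weight decay, batch size, GPU, and per-dataset crop sizes) come from the paper.

```python
# Hyperparameters reported in the paper; key names are illustrative,
# not the authors' actual configuration.
TRAIN_CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 1e-5,   # fixed learning rate
    "weight_decay": 1e-4,
    "batch_size": 10,        # following DM-Count
    "gpu": "NVIDIA 2080Ti",
}

# Dataset-dependent random-crop sizes (pixels), as stated in the paper.
CROP_SIZES = {
    "ShanghaiTech_PartA": 256,
    "ShanghaiTech_PartB": 512,
    "UCF-QNRF": 512,
    "NWPU": 384,
    "JHU-CROWD++": 384,
}

def crop_size(dataset: str) -> int:
    """Look up the training crop size for a dataset (hypothetical helper)."""
    return CROP_SIZES[dataset]

print(crop_size("NWPU"))  # → 384
```

A table like this is exactly the kind of explicit setup that makes a training run reproducible even without released code.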