Distribution Matching for Crowd Counting

Authors: Boyu Wang, Huidong Liu, Dimitris Samaras, Minh Hoai Nguyen

Venue: NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In terms of Mean Absolute Error, DM-Count outperforms the previous state-of-the-art methods by a large margin on two large-scale counting datasets, UCF-QNRF and NWPU, and achieves the state-of-the-art results on the Shanghai Tech and UCF-CC-50 datasets. DM-Count reduced the error of the state-of-the-art published result by approximately 16%." The paper also contains a dedicated experimental section ("5 Experiments: In this section, we describe experiments on toy data and on benchmark crowd counting datasets."), quantitative comparisons ("Quantitative Results. Tables 1 and 2 compare the performance of DM-Count against various methods."), and ablation studies ("5.3 Ablation Studies"). (See the MAE sketch after this table.)
Researcher Affiliation | Academia | Boyu Wang, Huidong Liu, Dimitris Samaras, Minh Hoai; Department of Computer Science, Stony Brook University, Stony Brook, NY 11790; {boywang, huidliu, samaras, minhhoai}@cs.stonybrook.edu
Pseudocode | No | The paper describes mathematical formulations and algorithms (e.g., the Sinkhorn algorithm) but does not include any structured pseudocode or algorithm blocks. (A generic Sinkhorn sketch is given after this table.)
Open Source Code | Yes | Code is available at https://github.com/cvlab-stonybrook/DM-Count.
Open Datasets | Yes | We perform experiments on four challenging crowd counting datasets: UCF-QNRF [15], NWPU [51], Shanghai Tech [60], and UCF-CC-50 [14]. It is worth noting that the NWPU dataset is the largest-scale and most challenging crowd counting dataset publicly available today. The ground truth counts for test images are not released, and the results on the test set must be obtained by submitting to the evaluation server at https://www.crowdbenchmark.com/nwpucrowd.html.
Dataset Splits | No | While Table 2 includes a 'Validation set' column, the paper does not specify the exact split percentages or sample counts for training, validation, and test sets. It states: 'More detailed dataset descriptions, implementation details and experimental settings can be found in the supplementary material.'
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks such as Python, PyTorch, or TensorFlow).
Experiment Setup | Yes | "In all experiments, we set λ1 = 0.1, λ2 = 0.01, and the Sinkhorn entropic regularization parameter to 10. The number of Sinkhorn iterations is set to 100." Hyper-parameter study: "We tune λ1 and λ2 in DM-Count on the UCF-QNRF dataset. First, we fix λ1 to 0.1 and tune λ2 from 0.01, 0.05 to 0.1. The MAE varies from 85.6, 87.8 to 88.5. As λ2 = 0.01 achieves the best result, we fix λ2 to 0.01 and tune λ1 from 0.01, 0.05 to 0.1. The MAE varies from 87.2, 86.2 to 85.6. Thus, we set λ1 = 0.1, λ2 = 0.01 and use them on all the datasets." (A sketch of how these hyper-parameters enter the objective follows this table.)
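
The Mean Absolute Error cited in the Research Type row is the standard crowd-counting metric: the average absolute difference between predicted and ground-truth person counts over the test images. A minimal sketch (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def mean_absolute_error(pred_counts, gt_counts):
    """Crowd-counting MAE: mean |predicted count - ground-truth count| over test images."""
    pred_counts = np.asarray(pred_counts, dtype=float)
    gt_counts = np.asarray(gt_counts, dtype=float)
    return float(np.abs(pred_counts - gt_counts).mean())

# Hypothetical predicted vs. ground-truth counts for three test images.
print(mean_absolute_error([102.4, 87.0, 510.2], [100, 90, 500]))  # 5.2
```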
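Since the paper references the Sinkhorn algorithm without giving pseudocode, the following is a generic sketch of entropic-regularized Sinkhorn iterations, not the authors' implementation; the function name and toy inputs are illustrative, and only the regularization value (10) and iteration count (100) come from the paper:

```python
import numpy as np

def sinkhorn(a, b, C, reg=10.0, num_iters=100):
    """Generic entropic-regularized Sinkhorn iterations.

    a, b : source / target marginals (1-D arrays with equal total mass)
    C    : cost matrix of shape (len(a), len(b))
    reg  : entropic regularization strength (the paper reports 10)
    Returns a transport plan P whose row sums approximate a and column sums approximate b.
    """
    K = np.exp(-C / reg)              # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(num_iters):        # the paper reports 100 iterations
        u = a / (K @ v + 1e-16)       # scale rows to match a
        v = b / (K.T @ u + 1e-16)     # scale columns to match b
    return u[:, None] * K * v[None, :]

# Toy usage: move mass between two small histograms.
a = np.array([0.5, 0.5])
b = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))   # approximately a and b
```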
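Assuming, consistent with the paper's overall objective, that λ1 weights an optimal-transport term and λ2 a total-variation term between the normalized predicted and ground-truth density maps, a minimal sketch of how the reported hyper-parameters would enter training is shown below. All names are placeholders, the OT term is passed in as a callable, and the exact per-term scaling used in the paper is omitted; this is not the authors' implementation.

```python
import torch

# Hyper-parameter values reported in the paper's experiment setup.
LAMBDA_OT = 0.1    # lambda_1: weight on the optimal-transport term
LAMBDA_TV = 0.01   # lambda_2: weight on the total-variation term

def dm_count_style_loss(pred_density, gt_dot_map, ot_loss_fn):
    """Sketch of a counting + OT + TV objective (illustrative, not the authors' code).

    pred_density : predicted density map, shape (H, W)
    gt_dot_map   : ground-truth dot map with one unit of mass per annotated person
    ot_loss_fn   : callable returning an entropic-OT loss between the two
                   normalized maps (e.g. via Sinkhorn with reg=10, 100 iterations)
    """
    pred_count = pred_density.sum()
    gt_count = gt_dot_map.sum()

    # Counting term: absolute difference of total counts.
    count_loss = torch.abs(pred_count - gt_count)

    # Normalize both maps to probability distributions before matching them.
    pred_dist = pred_density / (pred_count + 1e-8)
    gt_dist = gt_dot_map / (gt_count + 1e-8)

    ot_loss = ot_loss_fn(pred_dist, gt_dist)
    tv_loss = 0.5 * torch.abs(pred_dist - gt_dist).sum()

    return count_loss + LAMBDA_OT * ot_loss + LAMBDA_TV * tv_loss
```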