Modeling Noisy Annotations for Crowd Counting
Authors: Jia Wan, Antoni Chan
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present experiments using our loss in (10) for training density map estimators. The experiments are conducted on 6 datasets: NWPU-Crowd [35], JHU-CROWD++ [36], UCF-QNRF [25], ShanghaiTech [11], UCSD [6], and Mall [37]. |
| Researcher Affiliation | Academia | Jia Wan, Antoni B. Chan; Department of Computer Science, City University of Hong Kong; jiawan1998@gmail.com, abchan@cityu.edu.hk |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code, such as a repository link or an explicit statement of code release. |
| Open Datasets | Yes | The experiments are conducted on 6 datasets: NWPU-Crowd [35], JHU-CROWD++ [36], UCF-QNRF [25], ShanghaiTech [11], UCSD [6], and Mall [37]. |
| Dataset Splits | Yes | NWPU-Crowd is a large-scale benchmark for crowd counting which consists of 3,109 training images, 500 validation images, and 1,500 testing images. JHU-CROWD++ has 4,371 images (2,722, 500, and 1,600 for train, val, and test). UCF-QNRF contains 1,535 high-resolution images (1,201/334 for training/testing). ShanghaiTech consists of Part A and Part B: Part A has 482 images (300 for training, 182 for testing), while Part B has 716 images (400 for training, 316 for testing). For the datasets without a validation set, 10% of the training images are used for validation (a minimal split sketch follows the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions the Adam optimizer and the VGG19, CSRNet, and MCNN backbones, but does not provide version numbers for any software dependencies. |
| Experiment Setup | Yes | We use Adam optimizer for training with learning rate 10⁻⁵. The regularization weight λ is set to 0.1. (A hedged training-setup sketch follows the table.) |
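
The setup row above fixes only the optimizer, the learning rate, and λ; since the paper releases no code, the following PyTorch sketch is an assumption-laden reconstruction rather than the authors' implementation. The 1×1 prediction head, the `noise_loss` and `reg_loss` callables standing in for the paper's Eq. (10) likelihood and its regularizer, and the ImageNet pretraining choice are all hypothetical.

```python
# Hypothetical training-setup sketch (assumptions marked inline).
import torch
from torchvision.models import vgg19

def build_model():
    # VGG19 convolutional features as the backbone (backbone named in the
    # paper); the single-channel 1x1 density head is an assumption.
    backbone = vgg19(pretrained=True).features  # pretraining: assumption
    head = torch.nn.Conv2d(512, 1, kernel_size=1)
    return torch.nn.Sequential(backbone, head)

model = build_model()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # lr = 10^-5 per the paper
lam = 0.1  # regularization weight lambda = 0.1 per the paper

def training_step(images, density_targets, noise_loss, reg_loss):
    # noise_loss is a placeholder for the paper's annotation-noise loss
    # (Eq. 10); reg_loss is the regularization term weighted by lambda.
    optimizer.zero_grad()
    pred = model(images)
    loss = noise_loss(pred, density_targets) + lam * reg_loss(pred)
    loss.backward()
    optimizer.step()
    return loss.item()
```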
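
The dataset-splits row also notes that, for datasets shipping without an official validation set (e.g., UCSD and Mall), 10% of the training images are held out for validation, without saying how the holdout is drawn. A minimal sketch, assuming a simple seeded random holdout; the helper name, seed, and file layout are illustrative, not from the paper.

```python
# Minimal sketch of the 10% validation holdout (random split: assumption).
import random

def split_train_val(image_paths, val_fraction=0.1, seed=0):
    """Hold out val_fraction of the training images for validation."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_val = max(1, int(len(paths) * val_fraction))
    return paths[n_val:], paths[:n_val]  # (train, val)

# e.g., for a dataset without an official validation set:
# train_paths, val_paths = split_train_val(all_training_paths)
```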