Crowd Counting with Decomposed Uncertainty

Authors: Min-hwan Oh, Peder Olsen, Karthikeyan Natesan Ramamurthy. Pages 11799-11806.

AAAI 2020

Reproducibility variables, each listed with the assessed result and the supporting LLM response:
Research Type: Experimental. "We demonstrate that the proposed uncertainty quantification method provides additional insight to the crowd counting problem and is simple to implement. We also show that our proposed method exhibits state-of-the-art performances in many benchmark crowd counting datasets." From the Experiments section: "In this section, we first introduce datasets and experiment details. We give the evaluation results and perform comparisons between the proposed method with recent state-of-the-art methods."
Researcher Affiliation: Collaboration. Min-hwan Oh (Columbia University, New York, NY 10025); Peder Olsen (Microsoft, Azure Global Research, Redmond, WA 98052); Karthikeyan Natesan Ramamurthy (IBM Research, Yorktown Heights, NY 10598).
Pseudocode: Yes. The paper provides Algorithm 1 (Decomposed Uncertainty using Bootstrap) and Algorithm 2 (Uncertainty Recalibration).
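For context on what Algorithm 1 computes, the decomposition follows the standard ensemble recipe: K heads are trained on bootstrap resamples, epistemic uncertainty is read off the disagreement among the heads, and aleatoric uncertainty off the noise variance each head predicts. A minimal NumPy sketch of the test-time step, with all names, shapes, and toy data purely illustrative rather than the authors' code:

```python
# Illustrative sketch (not the authors' code) of the decomposition that
# Algorithm 1 describes: a shared trunk feeds K bootstrap-trained heads,
# each emitting a per-pixel density mean and a per-pixel noise variance.
import numpy as np

def decompose_uncertainty(head_means, head_vars):
    """head_means, head_vars: arrays of shape (K, H, W).

    Returns per-pixel epistemic and aleatoric variance maps.
    """
    # Aleatoric: average of the noise variances predicted by each head.
    aleatoric = head_vars.mean(axis=0)
    # Epistemic: disagreement (sample variance) among the K head means.
    epistemic = head_means.var(axis=0)
    return epistemic, aleatoric

# Toy usage with K = 10 heads on a 4x4 density map.
rng = np.random.default_rng(0)
mu = rng.random((10, 4, 4))        # per-head density means
var = rng.random((10, 4, 4)) * 0.1 # per-head predicted noise variances
epi, ale = decompose_uncertainty(mu, var)
count_estimate = mu.mean(axis=0).sum()  # predicted crowd count
```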
Open Source Code: No. The paper does not provide an explicit statement about releasing its own source code or a link to a code repository.
Open Datasets: Yes. From the "Performance comparisons" paragraph: "We evaluate our method on four publicly available crowd counting datasets: Shanghai Tech (Zhang et al. 2016), UCF-CC 50 (Idrees, Soomro, and Shah 2015), and UCF-QNRF (Idrees et al. 2018)."
Dataset Splits: Yes. "Shanghai Tech. ... We use the training and testing splits provided by the authors: 300 images for training and 182 images for testing in Part A; 400 images for training and 316 images for testing in Part B. UCF-CC 50. ... We use 5-fold cross-validation to evaluate the performance of the proposed method. UCF-QNRF. ... The training dataset contains 1,201 images, with which we train our model. ... Then, we test our model on the remaining 334 images in the test dataset."
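As an illustration of the quoted UCF-CC 50 protocol, 5-fold cross-validation over its 50 images can be set up as below; the shuffling and seed are assumptions, since the paper's exact fold assignment is not quoted:

```python
# Sketch of the 5-fold protocol quoted above for UCF-CC 50's 50 images:
# each fold holds out 10 images for testing and trains on the other 40.
from sklearn.model_selection import KFold

image_ids = list(range(50))  # UCF-CC 50 contains 50 images
kfold = KFold(n_splits=5, shuffle=True, random_state=0)  # assumed seed
for fold, (train_idx, test_idx) in enumerate(kfold.split(image_ids)):
    print(f"fold {fold}: train on {len(train_idx)}, test on {len(test_idx)}")
```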
Hardware Specification: No. The paper mentions experiencing 'memory issues in GPU while training' but does not specify the GPU model or any other details of the hardware used for the experiments.
Software Dependencies: No. The paper mentions using the Adam optimizer but does not provide version numbers for any software dependencies, libraries, or frameworks used in the experiments.
Experiment Setup: Yes. "We initialize the front-end layers (the first 10 convolutional layers) in our model with the corresponding part of a pretrained ResNet-50 (He et al. 2016). For the rest of the parameters, we initialize with a Gaussian distribution with mean 0 and standard deviation 0.01. Adam optimizer (Kingma and Ba 2014) with a learning rate of 10^-5 is applied to train the model. For all experiments, we used K = 10 heads for DUBNet."
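A sketch of how that quoted setup might look in code. This is an illustrative PyTorch snippet, not the authors' implementation: the tiny trunk stands in for the pretrained front-end layers, and the class and variable names are hypothetical.

```python
# Illustrative setup matching the quoted hyperparameters: Gaussian(0, 0.01)
# initialization for the non-pretrained parameters, Adam with learning
# rate 1e-5, and K = 10 prediction heads.
import torch
import torch.nn as nn

K = 10  # number of heads, as stated for DUBNet

class MultiHeadDensityNet(nn.Module):
    def __init__(self, k_heads=K):
        super().__init__()
        # Stand-in trunk; in the paper the front-end layers are copied
        # from a pretrained network instead of trained from scratch.
        self.trunk = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        # One 1x1 conv head per bootstrap sample, each predicting a density map.
        self.heads = nn.ModuleList(nn.Conv2d(16, 1, 1) for _ in range(k_heads))

    def forward(self, x):
        feats = self.trunk(x)
        # Shape (K, batch, 1, H, W): one density map per head.
        return torch.stack([head(feats) for head in self.heads], dim=0)

model = MultiHeadDensityNet()

# Gaussian init, mean 0 and std 0.01, for the randomly initialized heads
# ("the rest of the parameters" in the quote).
for head in model.heads:
    nn.init.normal_(head.weight, mean=0.0, std=0.01)
    nn.init.zeros_(head.bias)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```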