Hybrid Graph Neural Networks for Crowd Counting

Authors: Ao Luo, Fan Yang, Xin Li, Dong Nie, Zhicheng Jiao, Shangchen Zhou, Hong Cheng

AAAI 2020, pp. 11693-11700

Reproducibility assessment. Each variable below is listed with its result and the supporting LLM response.
Research Type: Experimental
LLM Response: In this section, we experimentally validate our HyGnn on four public counting benchmarks (i.e., ShanghaiTech Part A, ShanghaiTech Part B, UCF_CC_50 and UCF-QNRF). First, we conduct an ablation experiment to prove the effectiveness of our hybrid graph model and the multi-task learning. Then, our proposed HyGnn is evaluated on all of these public benchmarks, while comparing the performance against other state-of-the-art approaches.
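For context, results on these benchmarks are conventionally reported as MAE and (R)MSE over per-image counts. Below is a minimal sketch of both metrics, assuming NumPy arrays of predicted and ground-truth counts; the evaluation code itself is not part of the paper.

```python
import numpy as np

def mae(pred_counts: np.ndarray, gt_counts: np.ndarray) -> float:
    # Mean Absolute Error between predicted and ground-truth crowd counts.
    return float(np.mean(np.abs(pred_counts - gt_counts)))

def rmse(pred_counts: np.ndarray, gt_counts: np.ndarray) -> float:
    # Root Mean Squared Error; the crowd-counting literature often labels
    # this "MSE" even though the square root is taken.
    return float(np.sqrt(np.mean((pred_counts - gt_counts) ** 2)))
```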
Researcher Affiliation: Collaboration
LLM Response: Ao Luo (1), Fan Yang (2), Xin Li (2), Dong Nie (3), Zhicheng Jiao (4), Shangchen Zhou (5), Hong Cheng (1). Affiliations: 1 Center for Robotics, University of Electronic Science and Technology of China; 2 Inception Institute of Artificial Intelligence; 3 Department of Computer Science, University of North Carolina at Chapel Hill; 4 University of Pennsylvania; 5 Nanyang Technological University.
Pseudocode: No
LLM Response: The paper describes the model architecture and processes through text and diagrams, but does not include a dedicated pseudocode or algorithm block.
Open Source Code: No
LLM Response: The paper does not contain any explicit statement about providing open-source code or a link to a code repository for the described methodology.
Open Datasets: Yes
LLM Response: We use ShanghaiTech (Zhang et al. 2016), UCF_CC_50 (Idrees et al. 2013) and UCF-QNRF (Idrees et al. 2018b) for benchmarking our HyGnn.
Dataset Splits: Yes
LLM Response: UCF-QNRF is the largest dataset to date, containing 1,535 images, which are divided into train and test sets of 1,201 and 334 images, respectively.
Hardware Specification: No
LLM Response: The paper mentions using a truncated VGG as a backbone network but does not specify any hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies: No
LLM Response: The paper mentions using the Adam optimizer and the VGG-16 architecture, but does not provide version numbers for the software libraries or frameworks used in its implementation, such as PyTorch, TensorFlow, or Python.
Experiment Setup: Yes
LLM Response: We use the Adam optimizer with an initial learning rate of 10^-4. We set the momentum to 0.9, the weight decay to 10^-4, and the batch size to 8. For data augmentation, the training images and the corresponding ground truths are randomly flipped and cropped from different locations to the size of 400×400.
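The quoted setup maps onto a few lines of training code. Below is a minimal sketch, assuming PyTorch (the paper does not name its framework); all identifiers are illustrative, not the authors' implementation, and "momentum 0.9" is read as Adam's first-moment coefficient beta1.

```python
import random
import torch

def make_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    # Adam with the reported settings: lr 1e-4, weight decay 1e-4,
    # and beta1 = 0.9 standing in for the stated momentum (an assumption).
    return torch.optim.Adam(
        model.parameters(),
        lr=1e-4,
        betas=(0.9, 0.999),
        weight_decay=1e-4,
    )

def augment(image: torch.Tensor, density: torch.Tensor, size: int = 400):
    # Random 400x400 crop shared by the image and its density map so the
    # ground truth stays spatially aligned; assumes C x H x W tensors
    # with H, W >= size.
    _, h, w = image.shape
    top = random.randint(0, h - size)
    left = random.randint(0, w - size)
    image = image[:, top:top + size, left:left + size]
    density = density[:, top:top + size, left:left + size]
    # Random horizontal flip, applied to both tensors together.
    if random.random() < 0.5:
        image = torch.flip(image, dims=[2])
        density = torch.flip(density, dims=[2])
    return image, density
```

The reported batch size of 8 would then be set on the data loader, e.g. `torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)`.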