Error-Aware Density Isomorphism Reconstruction for Unsupervised Cross-Domain Crowd Counting
Authors: Yuhang He, Zhiheng Ma, Xing Wei, Xiaopeng Hong, Wei Ke, Yihong Gong
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on four benchmark datasets demonstrate the superiority of the proposed method and ablation studies investigate the efficiency and robustness. |
| Researcher Affiliation | Academia | 1 College of Artificial Intelligence, Xi'an Jiaotong University; 2 School of Software Engineering, Xi'an Jiaotong University; 3 School of Cyber Science and Engineering, Xi'an Jiaotong University; 4 Research Center for Artificial Intelligence, Peng Cheng Laboratory |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/GehenHe/EDIREC-Net. |
| Open Datasets | Yes | UCF-QNRF (Idrees et al. 2018). UCSD (Chan, Liang, and Vasconcelos 2008). MALL (Chen et al. 2012). VENICE (Liu, Salzmann, and Fua 2019a). FDST (Fang et al. 2019). |
| Dataset Splits | Yes | UCSD (Chan, Liang, and Vasconcelos 2008): we use frames 601 to 1400 for training and the remaining 1,200 frames for testing. MALL (Chen et al. 2012): we use the first 800 frames for training and keep the remaining 1,200 frames for testing. VENICE (Liu, Salzmann, and Fua 2019a): we use 80 images from a single scenario for training and keep the remaining images from the other 3 scenarios for testing. FDST (Fang et al. 2019): 60 video sequences are used for training and the rest are used for testing. UCF-QNRF (Idrees et al. 2018): 1,201 images are used for training and the remaining 334 images are used for testing. For ablation studies, we conduct experiments on the validation set (100 images randomly sampled from the training set) of the MALL dataset. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory specifications used for running experiments. |
| Software Dependencies | No | The paper mentions using VGG-19 architecture, Bayesian Loss, Adam optimizer, and exponential moving average, but does not provide specific version numbers for any software libraries or frameworks like PyTorch or TensorFlow. |
| Experiment Setup | Yes | The backbone and the density map header of ϕt(·; Θt) are initialized using Θs, and the erroneousness estimation header is randomly initialized. During training, Θt is updated using an Adam optimizer (Kingma and Ba 2014) with a learning rate of 10^-5, while Θa is updated using an exponential moving average (Tarvainen and Valpola 2017): Θa = αΘa + (1 − α)Θt, where α is the moving-step parameter. In this paper, we fix α = 0.999. The time interval parameter d is fixed to d = 3 according to the experimental results in Section 4.5. |
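The exponential-moving-average teacher update quoted above can be sketched in a few lines. This is not the authors' released code (see the EDIREC-Net repository for that); the function name `ema_update` and the plain-dict parameter representation are illustrative assumptions, while the update rule Θa = αΘa + (1 − α)Θt and α = 0.999 follow the paper.

```python
# Minimal sketch of the EMA (mean-teacher) parameter update described in the
# experiment setup. Parameters are modeled as a plain dict of floats; in
# practice they would be model weight tensors.

def ema_update(theta_a, theta_t, alpha=0.999):
    """Move each aggregated (teacher) parameter theta_a toward the
    student parameter theta_t: theta_a = alpha*theta_a + (1-alpha)*theta_t."""
    return {name: alpha * theta_a[name] + (1.0 - alpha) * theta_t[name]
            for name in theta_a}

# Toy usage with a single scalar "weight": the teacher drifts slowly
# toward the student over many training steps.
teacher = {"w": 0.0}
student = {"w": 1.0}
for _ in range(1000):
    teacher = ema_update(teacher, student, alpha=0.999)
```

With α = 0.999 the teacher integrates the student's weights over roughly the last thousand updates, which smooths out noisy pseudo-label gradients during unsupervised adaptation.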