Explicit Invariant Feature Induced Cross-Domain Crowd Counting

Authors: Yiqing Cai, Lianggangxu Chen, Haoyue Guan, Shaohui Lin, Changhong Lu, Changbo Wang, Gaoqi He

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Various experiments show our method achieves state-of-the-art performance on the standard benchmarks. Code is available at https://github.com/caiyiqing/IF-CKT. ... To demonstrate the superiority of our method, we conduct extensive experiments on four datasets, including Shanghai Tech B dataset (Zhang et al. 2016b), World Expo 10 dataset (Zhang et al. 2016a), UCF-QNRF dataset (Idrees et al. 2018) and MALL dataset (Chen et al. 2012).
Researcher Affiliation | Academia | 1 School of Computer Science and Technology, East China Normal University, Shanghai, China; 2 School of Mathematical Sciences, East China Normal University, Shanghai, China; 3 Johns Hopkins University, Mason Hall, USA; 4 Innovation Center for AI and Drug Discovery, East China Normal University, Shanghai, China; 5 Chongqing Key Laboratory of Precision Optics, Chongqing Institute of East China Normal University, Chongqing, China
Pseudocode | No | The paper describes its methodology using textual explanations and diagrams (Figure 2), but does not include explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/caiyiqing/IF-CKT.
Open Datasets | Yes | We conduct extensive experiments on four datasets, including Shanghai Tech B dataset (Zhang et al. 2016b), World Expo 10 dataset (Zhang et al. 2016a), UCF-QNRF dataset (Idrees et al. 2018) and MALL dataset (Chen et al. 2012).
Dataset Splits | No | The paper defines source and target domains, but does not explicitly provide details about train/validation/test dataset splits, specific percentages, or sample counts used for data partitioning within these domains.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running the experiments.
Software Dependencies | No | The paper mentions optimizers (SGD, Adam) and network architectures (VGG-16) but does not provide specific version numbers for any software dependencies, such as programming languages or libraries (e.g., Python, PyTorch).
Experiment Setup | Yes | For fair comparisons with previous methods, we chose the first 13 layers from the VGG-16 (Simonyan and Zisserman 2015) network as the basic feature encoder E. ... The G is trained using the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 10^-6. We use the Adam optimizer (Kingma and Ba 2015) with a learning rate of 10^-4 for the discriminators. ... γ1, γ2 and γ3 were set to 1, 0.3 and 1, respectively, by cross-validation.
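
As a reading aid for the reported experiment setup, below is a minimal PyTorch sketch of how such a configuration could look. It is not the authors' released code (see the GitHub link above for that): the layer indexing into torchvision's VGG-16 features, the generator/discriminator heads, and the loss-term names are assumptions made for illustration; only the optimizer choices, the 10^-6 / 10^-4 learning rates, and the γ weights (1, 0.3, 1) are taken from the paper.

```python
# Hypothetical sketch of the reported setup, not the authors' implementation.
# Assumes PyTorch/torchvision; module names and head architectures are illustrative.
import torch
import torch.nn as nn
from torchvision.models import vgg16

# Encoder E: the 13 convolutional layers of VGG-16 (no classifier).
# features[:30] keeps conv1_1 .. conv5_3 with their ReLU/pooling modules
# and drops the final max-pool; the exact cut-off is an assumption.
vgg = vgg16(weights="IMAGENET1K_V1")
encoder = nn.Sequential(*list(vgg.features.children())[:30])

# Placeholder heads standing in for the counting generator G and the
# domain discriminator(s) described in the paper (architectures unknown).
generator = nn.Sequential(encoder, nn.Conv2d(512, 1, kernel_size=1))
discriminator = nn.Sequential(
    nn.Conv2d(512, 64, kernel_size=3, padding=1),
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, kernel_size=3, padding=1),
)

# Optimizers as reported: SGD with lr 1e-6 for G, Adam with lr 1e-4 for D.
opt_g = torch.optim.SGD(generator.parameters(), lr=1e-6)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

# Loss weights gamma1, gamma2, gamma3 = 1, 0.3, 1 (set by cross-validation
# in the paper); the individual loss terms here are placeholders.
gamma1, gamma2, gamma3 = 1.0, 0.3, 1.0

def total_loss(l_count, l_adv, l_invariant):
    """Weighted sum of counting, adversarial and invariant-feature terms."""
    return gamma1 * l_count + gamma2 * l_adv + gamma3 * l_invariant
```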