Cascaded SR-GAN for Scale-Adaptive Low Resolution Person Re-identification

Authors: Zheng Wang, Mang Ye, Fan Yang, Xiang Bai, Shin'ichi Satoh

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluations on two simulated datasets and one public dataset demonstrate the advantages of our method over related state-of-the-art methods.
Researcher Affiliation | Collaboration | 1 National Institute of Informatics, Japan; 2 Hong Kong Baptist University, China; 3 The University of Tokyo, Japan; 4 Huazhong University of Science and Technology, China
Pseudocode | No | The paper includes architectural diagrams (e.g., Figure 2) but no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link indicating the availability of its source code.
Open Datasets | Yes | Following [Wang et al., 2016b], the evaluation is run on two simulated person datasets, SALR-VIPeR and SALR-PRID, which are based on the VIPeR dataset [Gray et al., 2007] and the PRID450S dataset [Roth et al., 2014] respectively, and on the public CAVIAR dataset [Cheng et al., 2011].
Dataset Splits | Yes | Following [Wang et al., 2016b], all datasets are randomly divided into a training set and a testing set. The numbers of persons for training and testing are, respectively, 532 and 100 (SALR-VIPeR), 400 and 50 (SALR-PRID), and 44 and 10 (CAVIAR). See the split sketch below the table.
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions deep learning models such as ResNet-50 and the VGG network but does not specify any software libraries or their version numbers (e.g., TensorFlow, PyTorch, Python versions).
Experiment Setup | Yes | The training process includes the following three steps: (1) We first initialize the re-identification network separately. We choose ResNet-50 [He et al., 2016] as the base. The ResNet-50 is pre-trained on ImageNet [Russakovsky et al., 2015] and then fine-tuned on the Market-1501 dataset [Zheng et al., 2015]. (2) The cascaded generator networks are initialized with MSE losses. (3) The whole network is trained simultaneously with all the losses. ... Following [Ledig et al., 2016], we set α = 2 * 10^-6 and β = 1 * 10^-3. A joint-training sketch follows the table.
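
For concreteness, the quoted split protocol can be written out as code. The following is a minimal Python sketch, not taken from the paper: the train/test identity counts come from the Dataset Splits row above, while random_split, the seed handling, and the 632-identity example (VIPeR's identity count) are illustrative assumptions.

    import random

    # Train/test identity counts quoted in the table above.
    SPLITS = {
        "SALR-VIPeR": (532, 100),
        "SALR-PRID": (400, 50),
        "CAVIAR": (44, 10),
    }

    def random_split(person_ids, n_train, n_test, seed=0):
        """Randomly divide person identities into disjoint training and
        testing sets, as in the protocol of [Wang et al., 2016b]."""
        ids = list(person_ids)
        assert len(ids) >= n_train + n_test
        random.Random(seed).shuffle(ids)
        return ids[:n_train], ids[n_train:n_train + n_test]

    # Example: VIPeR's 632 identities split into 532 training / 100 testing.
    train_ids, test_ids = random_split(range(632), *SPLITS["SALR-VIPeR"])
    print(len(train_ids), len(test_ids))  # 532 100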
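
The three training steps quoted in the Experiment Setup row can likewise be summarized as a skeleton. Below is a minimal PyTorch sketch written under several stated assumptions: CascadedSRGenerator, the tiny discriminator, and the dummy tensors are hypothetical placeholders; L1 stands in for SRGAN's VGG content loss so the sketch runs without a VGG model; and the reading that α weights the content term and β the adversarial term follows the SRGAN convention of [Ledig et al., 2016] rather than an explicit statement in the excerpt. This is not the authors' code (none was released, per the table).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    # Loss weights quoted in the paper, following [Ledig et al., 2016].
    ALPHA, BETA = 2e-6, 1e-3

    class CascadedSRGenerator(nn.Module):
        """Hypothetical stand-in for the paper's cascaded SR generators;
        the real network super-resolves in several cascaded stages."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Upsample(scale_factor=2, mode="bilinear",
                            align_corners=False),
                nn.Conv2d(64, 3, 3, padding=1),
            )

        def forward(self, x):
            return self.body(x)

    # Step 1: initialize the re-id network from ResNet-50. The paper loads
    # ImageNet-pretrained weights and fine-tunes on Market-1501;
    # weights=None here only keeps the sketch runnable offline.
    reid_net = models.resnet50(weights=None)

    # Tiny discriminator so the adversarial term has something to score;
    # purely illustrative, not the paper's architecture.
    disc = nn.Sequential(
        nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
    )

    gen = CascadedSRGenerator()
    mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()
    lr_img = torch.rand(4, 3, 64, 32)       # dummy low-resolution batch
    hr_img = torch.rand(4, 3, 128, 64)      # dummy high-resolution batch
    labels = torch.randint(0, 1000, (4,))   # dummy identity labels

    # Step 2: initialize the generator with the MSE loss alone.
    opt_g = torch.optim.Adam(gen.parameters())
    loss_g = mse(gen(lr_img), hr_img)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Step 3: train the whole network simultaneously with all the losses.
    # The discriminator's own update is omitted for brevity.
    opt = torch.optim.Adam(list(gen.parameters()) +
                           list(reid_net.parameters()))
    sr_img = gen(lr_img)
    adv = F.binary_cross_entropy_with_logits(disc(sr_img), torch.ones(4, 1))
    loss = (mse(sr_img, hr_img)
            + ALPHA * F.l1_loss(sr_img, hr_img)  # placeholder content loss
            + BETA * adv                         # adversarial loss
            + ce(reid_net(sr_img), labels))      # re-identification loss
    opt.zero_grad(); loss.backward(); opt.step()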