Disentangled Feature Learning Network for Vehicle Re-Identification

Authors: Yan Bai, Yihang Lou, Yongxing Dai, Jun Liu, Ziqian Chen, Ling-Yu Duan

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The experiments show the effectiveness of our method that achieves state-of-the-art performance on three challenging datasets. We conduct experiments on VehicleID [Liu et al., 2016a], VeRI-776 [Liu et al., 2016c] and VERI-Wild [Lou et al., 2019b] datasets, which are widely used vehicle ReID benchmarks."
Researcher Affiliation | Academia | 1. National Engineering Lab for Video Technology, Peking University, Beijing, China; 2. ISTD Pillar, Singapore University of Technology and Design, Singapore; 3. Peng Cheng Laboratory, Shenzhen, China
Pseudocode | No | The paper describes its algorithms verbally but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access information (e.g., a repository link or an explicit statement of code release) for the source code of its methodology.
Open Datasets | Yes | "We conduct experiments on VehicleID [Liu et al., 2016a], VeRI-776 [Liu et al., 2016c] and VERI-Wild [Lou et al., 2019b] datasets, which are widely used vehicle ReID benchmarks."
Dataset Splits | No | The paper mentions "In training stage" and "During testing" but does not give explicit percentages or counts for the training, validation, and test splits of the datasets used; it only lists test-set sizes for some datasets, which is insufficient for reproducibility.
Hardware Specification | No | The paper does not report any specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions ResNet50 [He et al., 2016] as the backbone, but does not specify any software dependencies (e.g., Python version, deep learning framework, libraries) with version numbers.
Experiment Setup | Yes | "Regarding parameters, we set ω as 0.5 and triplet margin as 0.6 in metric learning following [Lou et al., 2019b], and λ = 0.5 in hybrid ranking. The models are trained for 50 epochs. Learning rate starts from 0.003. The size of the input image is 256 × 256."
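The reported hyperparameters can be collected into a training configuration, paired with the standard triplet loss the margin refers to. This is a minimal sketch: the paper releases no code, so every variable name below is hypothetical, and only the numeric values come from the quoted setup.

```python
import math

# Hyperparameters as reported in the paper. Variable names are
# hypothetical -- the authors did not release code.
CONFIG = {
    "omega": 0.5,           # ω: weight in the metric-learning objective
    "triplet_margin": 0.6,  # margin of the triplet loss
    "lambda_rank": 0.5,     # λ: hybrid-ranking weight
    "epochs": 50,
    "initial_lr": 0.003,
    "input_size": (256, 256),
}

def euclidean(x, y):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def triplet_loss(anchor, positive, negative,
                 margin=CONFIG["triplet_margin"]):
    """Standard triplet loss: max(0, d(a, p) - d(a, n) + margin).

    Pulls the positive closer to the anchor than the negative by at
    least `margin` (0.6 in the paper's setup).
    """
    return max(0.0,
               euclidean(anchor, positive)
               - euclidean(anchor, negative)
               + margin)
```

For example, with the negative already farther than the positive by more than the margin, the loss is zero; otherwise it grows linearly with the violation.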