OVL: One-View Learning for Human Retrieval
Authors: Wenjing Li, Zhongcheng Wu
AAAI 2020, pages 11410-11417
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three large-scale datasets demonstrate the advantage of the proposed method over state-of-the-art domain adaptation and semi-supervised methods. |
| Researcher Affiliation | Academia | ¹High Magnetic Field Laboratory, Chinese Academy of Sciences; ²University of Science and Technology of China |
| Pseudocode | No | The paper describes the proposed method in text and provides a diagram (Figure 3) but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We evaluate the proposed method on three large-scale person re-ID benchmarks: Market-1501 (Zheng et al. 2015), DukeMTMC-reID (Ristani et al. 2016; Zheng, Zheng, and Yang 2017) and MSMT17 (Wei et al. 2018). |
| Dataset Splits | No | The paper mentions training data, labeled and unlabeled samples, and testing, but it does not provide specific details about train/validation/test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper mentions using "ResNet-50" as the backbone but does not specify any hardware details such as GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper mentions the use of "ResNet-50" and the "SGD optimizer" but does not specify any software versions for libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | We resize the input image to 256 × 128. Random flipping and random cropping are applied for data augmentation during training. We initialize the learning rate to 0.01 for the generator and 0.1 for the classifiers. The learning rate is divided by 10 after 40 epochs. The batch size is set to 64 for both source and target views. The SGD optimizer is used to train the network for a total of 60 epochs. By default, we set λ_vi = 0.2 and λ_un = 0.1. |
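
For reference, the reported setup translates into the minimal PyTorch sketch below. Only the quoted hyperparameters (input size, augmentations, learning rates, schedule, batch size, epochs, λ_vi and λ_un) come from the paper; everything else here is an assumption, since the authors release no code: the module names, the ImageNet initialization, the momentum value, the pad-before-crop augmentation detail, the classifier head shape, and how the λ weights enter the loss.

```python
# Hedged sketch of the reported training configuration; values marked
# "assumed" or "hypothetical" are NOT stated in the paper.
import torch
import torch.nn as nn
import torchvision.transforms as T
from torchvision.models import resnet50

# Inputs resized to 256x128; random flipping and random cropping used
# for augmentation (the pad-before-crop step is an assumed detail).
train_transform = T.Compose([
    T.Resize((256, 128)),
    T.RandomHorizontalFlip(),
    T.Pad(10),
    T.RandomCrop((256, 128)),
    T.ToTensor(),
])

# ResNet-50 backbone as the feature generator; the linear classifier head
# is a hypothetical placeholder (751 = Market-1501 training identities).
generator = resnet50(weights="IMAGENET1K_V1")  # ImageNet init assumed
generator.fc = nn.Identity()
classifier = nn.Linear(2048, 751)

# SGD with per-module learning rates: 0.01 (generator), 0.1 (classifiers).
optimizer = torch.optim.SGD(
    [
        {"params": generator.parameters(), "lr": 0.01},
        {"params": classifier.parameters(), "lr": 0.1},
    ],
    momentum=0.9,  # assumed; momentum and weight decay are not reported
)

# Learning rate divided by 10 after 40 epochs; 60 training epochs total.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[40], gamma=0.1
)

BATCH_SIZE = 64                   # per view (source and target)
NUM_EPOCHS = 60
LAMBDA_VI, LAMBDA_UN = 0.2, 0.1   # default loss weights from the paper
```

In a training loop, `scheduler.step()` would be called once per epoch so the tenfold learning-rate drop lands after epoch 40, matching the quoted schedule.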