Human-in-the-Loop Vehicle ReID

Authors: Zepeng Li, Dongxiang Zhang, Yanyan Shen, Gang Chen

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that even by interacting with flawed feedback generated by non-experts, IRIN still outperforms state-of-the-art Re-ID models by a considerable margin.
Researcher Affiliation | Academia | Zepeng Li (1), Dongxiang Zhang (1*), Yanyan Shen (2), Gang Chen (1); 1: Key Lab of Intelligent Computing Based Big Data of Zhejiang Province, Zhejiang University; 2: Department of Computer Science and Engineering, Shanghai Jiao Tong University; {lizepeng,zhangdongxiang,cg}@zju.edu.cn, shenyy@sjtu.edu.cn
Pseudocode | Yes | Algorithm 1: Vehicle Re-ID with iterative feedback (an illustrative sketch of such a loop follows the table).
Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology.
Open Datasets | Yes | We use two popular vehicle Re-ID benchmarks. VeRi-776 (Liu et al. 2016b) contains 49,357 images of 776 different vehicles, captured by 20 cameras in multiple viewpoints. VehicleID (Liu et al. 2016a) is a larger-scale dataset, with 221,567 images and 26,328 vehicles.
Dataset Splits | No | The paper mentions using VeRi-776 and VehicleID for training and evaluation, and specifies three test sets of different scales (small, medium, and large) for VehicleID, but it does not provide details of a distinct validation split (e.g., percentages or sample counts).
Hardware Specification | Yes | The model is implemented with PyTorch and trained on a Tesla V100 GPU.
Software Dependencies | No | The paper states that the model is implemented with PyTorch, but it does not specify the PyTorch version or any other software dependencies with version numbers.
Experiment Setup | Yes | Following previous Re-ID models, the input images are resized to 240×240 and augmented by random flipping, random padding and random erasing. The feature dimension is set to 2,048. The model is trained for 120 epochs with a batch size of 128. An SGD optimizer is employed with a momentum of 0.9 and a weight decay of 5e-4. Each batch contains 8 images per vehicle. The initial learning rate is set to 0.01 and linearly decayed to 0.0001. (A configuration sketch follows the table.)
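
The table quotes only the name of Algorithm 1 ("Vehicle Re-ID with iterative feedback"). For orientation, below is a minimal sketch of what such a human-in-the-loop loop could look like: rank the gallery, ask an annotator about the top candidates, refine the query with confirmed matches, and re-rank. The `get_feedback` interface and the query-expansion update are illustrative assumptions, not the authors' actual Algorithm 1.

```python
# Hedged sketch of a human-in-the-loop iterative re-ranking loop.
# The refinement step (query expansion from confirmed matches) is an
# assumption for illustration, not the procedure from the paper.
import torch
import torch.nn.functional as F

def iterative_feedback_reid(query_feat, gallery_feats, get_feedback,
                            rounds=3, top_k=5):
    """query_feat: (D,) tensor; gallery_feats: (N, D) tensor.
    get_feedback(indices) returns a list of bools from a (possibly noisy) annotator."""
    query = F.normalize(query_feat, dim=0)
    gallery = F.normalize(gallery_feats, dim=1)
    confirmed = set()
    for _ in range(rounds):
        scores = gallery @ query                          # cosine similarities, shape (N,)
        ranking = torch.argsort(scores, descending=True)
        # Ask the annotator about the top-k not-yet-confirmed candidates.
        candidates = [i.item() for i in ranking if i.item() not in confirmed][:top_k]
        feedback = get_feedback(candidates)
        confirmed.update(i for i, ok in zip(candidates, feedback) if ok)
        if confirmed:
            # Pull the query towards the mean of the confirmed gallery features.
            positives = gallery[list(confirmed)].mean(dim=0)
            query = F.normalize(query + positives, dim=0)
    return torch.argsort(gallery @ query, descending=True)

# Example with random features and a dummy annotator that accepts everything:
# ranking = iterative_feedback_reid(torch.randn(2048), torch.randn(100, 2048),
#                                   lambda idx: [True] * len(idx))
```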
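
The quoted experiment setup maps fairly directly onto standard PyTorch components. The sketch below restates it under the assumption of a torchvision-style pipeline; the ResNet-50 backbone, the pad-then-crop approximation of "random padding", and the 16 × 8 identity sampling are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of the reported training configuration, assuming standard
# torchvision/PyTorch components. The backbone and sampler are placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as T

# Input pipeline: resize to 240x240, random flipping, padding and erasing.
train_transform = T.Compose([
    T.Resize((240, 240)),
    T.RandomHorizontalFlip(),
    T.Pad(10),                        # "random padding" approximated by pad + random crop
    T.RandomCrop((240, 240)),
    T.ToTensor(),
    T.RandomErasing(),
])

# Placeholder backbone whose global feature dimension (2,048) matches the paper.
backbone = models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()     # expose the 2,048-d feature vector

# SGD with momentum 0.9, weight decay 5e-4, initial learning rate 0.01.
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)

# Linear decay of the learning rate from 0.01 to 0.0001 over 120 epochs.
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=0.01, total_iters=120)

# Batch composition: 128 images per batch with 8 images per vehicle,
# i.e. 16 identities x 8 instances per batch (PK-style sampling, assumed).
EPOCHS, BATCH_SIZE, IMAGES_PER_ID = 120, 128, 8
```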