SJDL-Vehicle: Semi-supervised Joint Defogging Learning for Foggy Vehicle Re-identification
Authors: Wei-Ting Chen, I-Hsiang Chen, Chih-Yuan Yeh, Hao-Hsiang Yang, Jian-Jiun Ding, Sy-Yen Kuo
AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the proposed method is effective and outperforms other existing vehicle Re-ID methods in foggy weather. |
| Researcher Affiliation | Collaboration | Wei-Ting Chen (1,3), I-Hsiang Chen (2), Chih-Yuan Yeh (2), Hao-Hsiang Yang (2), Jian-Jiun Ding (2), and Sy-Yen Kuo (2). Affiliations: (1) Graduate Institute of Electronics Engineering, National Taiwan University, Taiwan; (2) Department of Electrical Engineering, National Taiwan University, Taiwan; (3) ASUS Intelligent Cloud Services, Taiwan. Emails: {f05943089, f09921058, f09921063, r05921014, jjding, sykuo}@ntu.edu.tw |
| Pseudocode | No | The paper describes the proposed architecture and processes in prose but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and dataset are available at https://github.com/Cihsaing/SJDLFoggy-Vehicle-Re-Identification--AAAI2022. |
| Open Datasets | Yes | Due to the lack of a dataset specialized for vehicle Re-ID in foggy weather, the authors construct a dataset called FVRID, which consists of real-world and synthetic foggy images, to train and evaluate performance. The code and dataset are available at https://github.com/Cihsaing/SJDLFoggy-Vehicle-Re-Identification--AAAI2022. |
| Dataset Splits | No | The paper details 'Train', 'Probe', and 'Gallery' sets, but does not describe a separate validation split for hyperparameter tuning or early stopping during training; 'Probe' serves as the evaluation/test set. |
| Hardware Specification | Yes | The network is trained on an Nvidia Tesla V100 GPU for 20 hours and implemented on the PyTorch platform. |
| Software Dependencies | No | The paper states 'we implement it on the Pytorch platform' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The input image is resized to 384 × 384 and the training batch size Q is set to 36. Horizontal flip and random crop are applied to prevent overfitting given the limited training data. Models are trained for 120 epochs with a warm-up strategy: the initial learning rate is 1.09 × 10^-5, which increases to 10^-4 after the 10th epoch. The Adam optimizer is adopted with a decay rate of 0.6. The hyper-parameters λ1, λ2, λ3, and λ4 are set to 1, 10^-5, 10^-5, and 300, respectively. |
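Since the Experiment Setup row packs several hyper-parameters into prose, the sketch below restates them as a minimal PyTorch training skeleton. It is illustrative only: the placeholder model, the crop padding, and the reading of the 0.6 "decay rate" as a multiplicative learning-rate decay are assumptions; the numeric values are quoted from the table above.

```python
# Minimal PyTorch sketch of the reported training configuration.
# Only the numeric values come from the paper; the placeholder model,
# the crop padding, and the learning-rate decay interpretation are
# assumptions made for illustration.
import torch
from torch import nn, optim
from torchvision import transforms

# Input pipeline: resize to 384 x 384, horizontal flip, random crop.
train_transform = transforms.Compose([
    transforms.Resize((384, 384)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop((384, 384), padding=10),  # padding is an assumption
    transforms.ToTensor(),
])

# Placeholder network; the actual SJDL architecture is not reproduced here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 384 * 384, 576))

# Adam optimizer starting at the reported initial learning rate.
optimizer = optim.Adam(model.parameters(), lr=1.09e-5)

# Loss weights as reported: lambda1..lambda4 = 1, 1e-5, 1e-5, 300.
LAMBDA1, LAMBDA2, LAMBDA3, LAMBDA4 = 1.0, 1e-5, 1e-5, 300.0

def warmup_lambda(epoch: int) -> float:
    """Linearly warm the LR from 1.09e-5 to 1e-4 over the first 10 epochs,
    then hold it (the paper does not give later decay milestones)."""
    base, target = 1.09e-5, 1e-4
    if epoch < 10:
        return 1.0 + (target / base - 1.0) * epoch / 10
    return target / base

scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_lambda)

BATCH_SIZE = 36  # batch size Q from the paper
for epoch in range(120):  # 120 epochs total
    # ... forward pass, weighted loss, optimizer.step() over the train set ...
    scheduler.step()
```

`LambdaLR` is used here because it expresses the warm-up as a simple multiplier on the initial rate; the paper does not state which scheduler (if any) the authors used, so any schedule that reaches 10^-4 at epoch 10 would match the reported description equally well.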