Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
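In practice, that validation reduces to measuring agreement between the pipeline's labels and the manual gold labels. The sketch below is a minimal, hypothetical illustration of that comparison; the variable names and example labels are ours, and this is not the actual evaluation code from [1]:

```python
def agreement(llm_labels, gold_labels):
    """Fraction of items where the LLM label matches the manual label."""
    assert len(llm_labels) == len(gold_labels)
    hits = sum(p == g for p, g in zip(llm_labels, gold_labels))
    return hits / len(gold_labels)

# Hypothetical example: per-paper labels for one reproducibility variable.
llm = ["Yes", "No", "Yes", "Yes"]
gold = ["Yes", "No", "No", "Yes"]
print(f"agreement: {agreement(llm, gold):.2f}")  # agreement: 0.75
```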
Adaptive Uncertainty-Based Learning for Text-Based Person Retrieval
Authors: Shenshen Li, Chen He, Xing Xu, Fumin Shen, Yang Yang, Heng Tao Shen
AAAI 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our AUL method consistently achieves state-of-the-art performance on three benchmark datasets in supervised, weakly supervised, and domain generalization settings. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering and Center for Future Media, University of Electronic Science and Technology of China, China |
| Pseudocode | No | The paper describes its method using mathematical formulations and descriptive text, but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/CFM-MSG/Code-AUL. |
| Open Datasets | Yes | We evaluate our model on three benchmark datasets, including: 1) CUHK-PEDES (Li et al. 2017)... 2) ICFG-PEDES (Ding et al. 2021)... 3) RSTPReid (Zhu et al. 2021)... |
| Dataset Splits | No | The paper mentions training and test sets for the ICFG-PEDES dataset but does not explicitly detail a separate validation split or its use. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions PyTorch as the implementation framework but does not provide specific version numbers for it or any other software libraries used. |
| Experiment Setup | Yes | Then we resize the image to 384 Γ— 128 and set the length for each textual token sequence to 56. Initialized by parameters of the first stage, we trained our AUL model with PyTorch for 35 epochs using the Adam optimizer (Kingma and Ba 2015) with a learning rate initialized at 5e-5 and decayed to 5e-6 following a linear learning rate decay. The batch size is set as 128. Finally, λ and γ₀ are set to 0.8 and 1.0 for all experiments. |
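For reference, the quoted setup maps onto a straightforward PyTorch training configuration. The sketch below is illustrative only: the placeholder model, the variable names, and the choice of `LinearLR` for the linear decay are our assumptions (the paper does not name its scheduler implementation); the numeric values are those quoted in the table above.

```python
import torch
from torch import nn
from torchvision import transforms

# Hyperparameters quoted from the paper's experiment setup.
IMAGE_SIZE = (384, 128)      # resized image resolution (H x W)
MAX_TOKENS = 56              # textual token sequence length
EPOCHS = 35
BATCH_SIZE = 128
LR_INIT, LR_FINAL = 5e-5, 5e-6
LAMBDA, GAMMA_0 = 0.8, 1.0   # the paper's lambda and gamma_0

# Image preprocessing to 384 x 128, as described.
preprocess = transforms.Compose([
    transforms.Resize(IMAGE_SIZE),
    transforms.ToTensor(),
])

model = nn.Linear(16, 16)    # placeholder standing in for the AUL model
optimizer = torch.optim.Adam(model.parameters(), lr=LR_INIT)

# Linear decay from 5e-5 to 5e-6 over the 35 training epochs.
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer,
    start_factor=1.0,
    end_factor=LR_FINAL / LR_INIT,  # 0.1
    total_iters=EPOCHS,
)

for epoch in range(EPOCHS):
    # ... one pass over the training loader would go here ...
    scheduler.step()
```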