Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Towards Anytime Retrieval: A Benchmark for Anytime Person Re-Identification

Authors: Xulin Li, Yan Lu, Bin Liu, Jiaze Li, Qinhong Yang, Tao Gong, Qi Chu, Mang Ye, Nenghai Yu

IJCAI 2025

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments show that our model leads to satisfactory results and exhibits excellent generalization to all scenarios. |
| Researcher Affiliation | Academia | School of Cyber Science and Technology, University of Science and Technology of China; Anhui Province Key Laboratory of Digital Security; The Chinese University of Hong Kong; School of Computer Science, Wuhan University, China |
| Pseudocode | No | The paper describes methods using mathematical formulations and diagrams (e.g., Figure 4) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a GitHub link (https://github.com/kw66/AT-ReID) when describing the AT-USTC dataset, but it does not state that the source code for the proposed Uni-AT model or methodology is released at this link. The link is presented as a reference for the dataset. |
| Open Datasets | Yes | We collect the first corresponding large-scale dataset named AT-USTC, which contains 135k images and covers all six scenarios in AT-ReID. Our data collection spans an entire year... https://github.com/kw66/AT-ReID |
| Dataset Splits | Yes | The dataset has a fixed split into training and testing sets. The training set consists of 135 people with 109,183 images, and the testing set consists of another 135 people with 26,510 images. 20% of the training images were partitioned off for validation. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It mentions general implementation details but no hardware. |
| Software Dependencies | No | The paper mentions using a ViT-Base model, BNNeck, and the SGD optimizer, but does not provide version numbers for any software libraries, frameworks (such as PyTorch or TensorFlow), or other dependencies. |
| Experiment Setup | Yes | The image is fed into a Multi-Scenario ReID (MS-ReID) framework... We choose Vision Transformer (ViT) [Dosovitskiy et al., 2020] as our backbone... BNNeck before the classifier. All person images are resized to 256×128 and augmented with random horizontal flipping, padding, random cropping, and random erasing [Zhong et al., 2020] during training. The batch size is set to 64 with 8 identities. The whole model is trained for 120 epochs (24K iterations) with the SGD optimizer. The learning rate is initialized to 0.008 with a warm-up scheme and cosine learning rate decay. The hyper-parameters k and γ are both set to 1. |
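The experiment setup implies identity-balanced P×K batches (8 identities × 8 images = 64) and a warmed-up cosine learning-rate schedule peaking at 0.008 over 24K iterations. A minimal sketch of both pieces, assuming a linear warm-up whose length (here 2,400 iterations, i.e., 10%) is not stated in the paper; `images_by_id` and `pk_sample` are hypothetical names, not from the paper:

```python
import math
import random

def lr_at(step, total_steps=24_000, base_lr=0.008, warmup_steps=2_400):
    """Learning rate at a given iteration: linear warm-up, then cosine decay.

    base_lr and total_steps come from the paper; warmup_steps is an assumption.
    """
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

def pk_sample(images_by_id, p=8, k=8, rng=None):
    """Draw one identity-balanced batch: p identities, k images each (p*k = 64).

    images_by_id maps a person ID to its list of image paths (hypothetical data).
    """
    rng = rng or random.Random(0)
    pids = rng.sample(sorted(images_by_id), p)
    batch = []
    for pid in pids:
        imgs = images_by_id[pid]
        # If an identity has fewer than k images, sample with replacement.
        chosen = rng.sample(imgs, k) if len(imgs) >= k else [rng.choice(imgs) for _ in range(k)]
        batch.extend((pid, img) for img in chosen)
    return batch
```

Under these assumptions the schedule rises to 0.008 by the end of warm-up and decays toward zero at iteration 24,000, while each sampled batch contains 64 (id, image) pairs covering exactly 8 identities.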