QI-IRA: Quantum-Inspired Interactive Ranking Aggregation for Person Re-identification

Authors: Chunyu Hu, Hong Zhang, Chao Liang, Hao Huang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comparative experiments conducted on six public re-ID datasets validate the superiority of the proposed QI-IRA method over existing unsupervised, interactive, and fully-supervised RA approaches. "We conduct extensive comparative experiments on four image and two video re-ID datasets, on which QI-IRA achieves consistent superiority over unsupervised, fully-supervised and interactive RA baselines."
Researcher Affiliation | Academia | Chunyu Hu (1*), Hong Zhang (2,3*), Chao Liang (1,2,3,4), Hao Huang (1). Affiliations: 1) School of Computer Science, Wuhan University, China; 2) School of Cyber Science and Engineering, Wuhan University, China; 3) National Engineering Research Center for Multimedia Software, Wuhan University, China; 4) Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, China.
Pseudocode | Yes | Algorithm 1: QI-IRA. Input: query q and basic ranking scores S = {s_m}_{m=1}^M. Output: aggregated ranking scores s. Steps: 1) adopt the Mean RA method to generate the initial ranking; 2) while not a satisfied ranking do; 3) collect Ψ_q^+ and Ψ_q^- using relevance feedback; 4) calculate new fusion weights {w_m^t}_{m=1}^M by Eq. 5; 5) update the fusion weights {w_m^t}_{m=1}^M by Eq. 6; 6) end while; 7) compute aggregated ranking scores s by Eq. 4. (A hedged Python sketch of this loop appears after the table.)
Open Source Code | No | The paper provides GitHub links (in footnotes) to the original code of the comparison methods LambdaMART, LambdaRank, and RankNet, but it gives no statement or link for the source code of its own proposed method, QI-IRA.
Open Datasets | Yes | "We evaluate our method on four image re-ID datasets (Market1501 (Zheng et al. 2015), DukeMTMC-reID (Ristani et al. 2016) and CUHK03 detected and labeled (Li et al. 2014)), and two video re-ID datasets (MARS (Zheng et al. 2016) and DukeMTMC-VideoReID (Wu et al. 2018)), which are all popular benchmarks evaluated in various person re-ID studies. We use official training sets to train basic re-ID and fully-supervised RA methods, and use official test sets to evaluate all RA methods."
Dataset Splits | No | The paper mentions using "official training sets" and "official test sets" but never describes a validation split: no proportions and no methodology for constructing one. The cited benchmarks come with predefined train/test splits, but the paper itself does not detail any validation protocol.
Hardware Specification | No | The paper states: "The numerical calculations in this paper have been done on the supercomputing system in the Supercomputing Center of Wuhan University." This is a general statement about the computing environment but does not provide specific hardware details like GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions using the "original codes directly in our experiments" for the comparison methods (LambdaMART, LambdaRank, RankNet) and links their GitHub repositories, but it specifies no version numbers for these or for any other software dependency (e.g., Python or PyTorch versions) used in its own implementation of QI-IRA.
Experiment Setup | Yes | "We use QI-IRA(K, T) to represent our method, where K denotes the number of feedback samples in each interaction round and T denotes the maximum number of interaction rounds. So K×T is the total number of feedback samples. In the weight adjustment step of QI-IRA, the fusion factor γ is set to 0.7 for the weight of the new round when fusing the weights calculated from two adjacent interaction rounds." (A toy example of this weight fusion appears after the table.)
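
To make Algorithm 1 (Pseudocode row above) concrete, here is a minimal Python sketch of the interactive loop. It is an interpretation, not the authors' implementation: Eqs. 4-6 are not reproduced in this table, so the feedback-driven reweighting (the `margin`-based softmax) is a placeholder, and `get_feedback` is a hypothetical callback standing in for a real user supplying the relevance sets Ψ_q^+ and Ψ_q^-. Only the overall structure (Mean RA initialisation, per-round feedback, γ-weighted fusion of adjacent rounds' weights, weighted aggregation) follows the quoted pseudocode and experiment setup.

```python
import numpy as np

def qi_ira(scores, get_feedback, gamma=0.7, k=2, max_rounds=3):
    """Hedged sketch of QI-IRA's interactive loop (Algorithm 1).

    scores       -- (M, N) array of basic ranking scores: row m holds s_m,
                    the scores of the N gallery items under base ranker m.
    get_feedback -- callable(ranking, k) -> (pos, neg): index lists playing
                    the role of the relevance sets Psi_q^+ / Psi_q^-.
    gamma        -- fusion factor for the new round's weights (0.7 in the paper).
    k, max_rounds -- the K and T of QI-IRA(K, T).
    """
    m, _ = scores.shape
    weights = np.full(m, 1.0 / m)            # step 1: Mean RA initialisation
    for _ in range(max_rounds):              # paper loops until the user is satisfied
        ranking = np.argsort(-(weights @ scores))
        pos, neg = get_feedback(ranking, k)  # step 3: collect feedback sets
        # Placeholder for Eq. 5 (exact formula not quoted above): favour
        # rankers that score confirmed positives above confirmed negatives.
        margin = scores[:, pos].sum(axis=1) - scores[:, neg].sum(axis=1)
        new_w = np.exp(margin - margin.max())  # stabilised softmax weighting
        new_w /= new_w.sum()
        # Eq. 6 as described in the setup: fuse adjacent rounds' weights.
        weights = gamma * new_w + (1.0 - gamma) * weights
    return weights @ scores                  # step 7: aggregated scores s (Eq. 4)
```

A real deployment would replace `get_feedback` with actual user clicks and stop as soon as the user accepts the ranking, with T only as an upper bound on the number of rounds.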
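
The γ = 0.7 setting quoted in the Experiment Setup row reads most naturally as a convex combination of the weights from two adjacent rounds; this is an assumption about the form of Eq. 6, since the equation itself is not quoted above. With toy numbers:

```python
# Hypothetical two-ranker illustration of the gamma = 0.7 weight fusion:
# the new round's weights get weight gamma, the previous round's (1 - gamma).
gamma = 0.7
w_prev = [0.50, 0.50]   # fusion weights after round t-1 (toy values)
w_new  = [0.90, 0.10]   # weights computed from round t's feedback
w_fused = [gamma * n + (1 - gamma) * p for n, p in zip(w_new, w_prev)]
print(w_fused)          # ~[0.78, 0.22]
```

Read this way, the update damps round-to-round swings: even a drastic reweighting suggested by one round's feedback moves the operative weights only 70% of the way toward it.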