Cluster-Aware Similarity Diffusion for Instance Retrieval
Authors: Jifei Luo, Hantao Yao, Changsheng Xu
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations in instance retrieval and object re-identification validate the effectiveness of the proposed CAS; our code is publicly available. ... We conduct experiments on instance retrieval and object re-identification (ReID) tasks to verify the effectiveness of our proposed method. For instance retrieval, the widely known Oxford5k (Philbin et al., 2007) and Paris6k (Philbin et al., 2008) datasets were revisited by Radenović et al. (2018), and are referred to as Revisited Oxford5k (ROxf) and Revisited Paris6k (RPar), respectively. ... The mean Average Precision (mAP) metric is used for measuring the performance of instance retrieval, and the assessment includes reporting results for the Easy (E), Medium (M), and Hard (H) protocols on the ROxf and RPar datasets, respectively. For object ReID, we also consider Rank-1 (CMC@1) and mean Inverse Negative Penalty (mINP) (Ye et al., 2022). |
| Researcher Affiliation | Academia | ¹University of Science and Technology of China, Hefei, China; ²Institute of Automation, Chinese Academy of Sciences, Beijing, China; ³University of Chinese Academy of Sciences. |
| Pseudocode | Yes | Algorithm 1 Efficient Iterative Solution for Bidirectional Similarity Diffusion Process |
| Open Source Code | Yes | Evaluations in instance retrieval and object re-identification validate the effectiveness of the proposed CAS; our code is publicly available. |
| Open Datasets | Yes | We conduct experiments on instance retrieval and object re-identification (ReID) tasks to verify the effectiveness of our proposed method. For instance retrieval, the widely known Oxford5k (Philbin et al., 2007) and Paris6k (Philbin et al., 2008) datasets were revisited by Radenović et al. (2018), and are referred to as Revisited Oxford5k (ROxf) and Revisited Paris6k (RPar), respectively. Moreover, a collection of 1 million distractor images is added to form the large-scale ROxf+1M and RPar+1M datasets. We also evaluate our method on object ReID datasets, including Market1501 (Zheng et al., 2015) and CUHK03 (Li et al., 2014). |
| Dataset Splits | No | The paper mentions using datasets for training and evaluation but does not provide specific details on how these datasets were split into training, validation, and testing sets (e.g., percentages or sample counts for each split). While it refers to 'training' models on certain datasets, explicit validation set information is not provided. |
| Hardware Specification | Yes | The re-ranking latency is tested with one single RTX 3090. |
| Software Dependencies | No | The paper mentions various deep learning models and re-ranking methods but does not provide specific software dependency versions (e.g., Python, PyTorch, TensorFlow versions or library versions like scikit-learn with specific numbers). |
| Experiment Setup | Yes | The hyper-parameters k1 and k2 are used to approximate the local cluster C and the neighbor set ξ, respectively. As shown in Figure 4, setting the hyper-parameter of the traditional diffusion-based method as k1, our approach exhibits its robustness in comparison. A further analysis of k2 is shown in Table 8. ... In Eq. (18), the hyper-parameter ω is treated as a balance weight to fuse the original Euclidean distance and the Jensen-Shannon divergence. As shown in Table 9, the optimal result is attained when ω is set to 0.2 or 0.3. ... The regularization term is weighted by the hyper-parameter µ, and the semi-positive matrix E is introduced to prevent the objective similarity matrix F from becoming excessively smooth. ... The regularization term weighted by β constrains the value of F̂ᵢ from deviating too much from the initial value F; we set β to a low value to achieve better numerical stability. |
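The quoted setup describes ω as a balance weight fusing the original Euclidean distance with a Jensen-Shannon divergence term (Eq. 18 of the paper), with ω = 0.2–0.3 reported as optimal. A minimal sketch of that fusion, assuming a simple convex combination `(1 − ω)·Euclidean + ω·JS` over descriptors normalized to probability vectors (the exact form in the paper may differ), could look like:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two non-negative vectors,
    normalized here to probability distributions."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def fused_distance(x, y, omega=0.2):
    """Hypothetical omega-weighted fusion of Euclidean distance and
    JS divergence, following the paper's description of omega as a
    balance weight; omega=0.2 matches the reported optimum."""
    euclid = np.linalg.norm(x - y)
    js = js_divergence(np.abs(x), np.abs(y))
    return (1.0 - omega) * euclid + omega * js
```

With ω = 0 this reduces to plain Euclidean distance, and identical inputs give a fused distance of zero, which makes the weight's role as an interpolation knob easy to sanity-check.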