Collapse-Aware Triplet Decoupling for Adversarially Robust Image Retrieval

Authors: Qiwei Tian, Chenhao Lin, Zhengyu Zhao, Qian Li, Chao Shen

ICML 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on three datasets demonstrate that CA-TRIDE outperforms existing defense methods in both conventional and new metrics. |
| Researcher Affiliation | Academia | Qiwei Tian¹, Chenhao Lin¹, Zhengyu Zhao¹, Qian Li¹, Chao Shen¹. ¹Xi'an Jiaotong University, Xi'an, China. Correspondence to: Chenhao Lin <linchenhao@xjtu.edu.cn>, Chao Shen <chaoshen@mail.xjtu.edu.cn>. |
| Pseudocode | Yes | Algorithm 1: Generating Adversarial Triplets in CA-TRIDE. |
| Open Source Code | Yes | Code is available at https://github.com/michaeltian108/CA-TRIDE. |
| Open Datasets | Yes | Evaluations are on three popular image-retrieval datasets: CUB-200-2011 (Welinder et al., 2010), Cars196 (Krause et al., 2013), and SOP (Oh Song et al., 2016). |
| Dataset Splits | No | The paper describes a "semi-hard sampling" strategy for mini-batch sampling and an epoch-wise adjustment to η, but it does not provide explicit train/validation/test splits (e.g., percentages or sample counts) for reproduction. A sketch of standard semi-hard sampling follows the table. |
| Hardware Specification | Yes | We conducted 5 runs of HM and CA-TRIDE on the CUB dataset with an RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions using the "ADAM (Kingma & Ba, 2014) optimizer" and "PGD (Madry et al., 2017)" but does not specify version numbers for these or any other software components or libraries. |
| Experiment Setup | Yes | We train our models using the ADAM (Kingma & Ba, 2014) optimizer with a 1.0 × 10⁻³ learning rate, a mini-batch size of 112, and 100 training epochs on the above three datasets. For the top-rank pair, γ = 0.5 and the triplet margin β_TR in L_TR is 0.04. Adversarial perturbations are generated through PGD (Madry et al., 2017) with an optimization step α = 1/255 and 16 iterations, clipped to an ℓ∞ norm of ε = 8/255. A PGD sketch with these hyperparameters follows the table. |
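The "semi-hard sampling" mentioned under Dataset Splits is not specified further in the table above. Below is a minimal sketch of standard semi-hard negative mining (Schroff et al., 2015), using the paper's triplet margin of 0.04 as the band width; the function name `mine_semi_hard` is hypothetical, and CA-TRIDE's exact batch-sampling rule may differ.

```python
import torch

def mine_semi_hard(emb, labels, margin=0.04):
    """For each (anchor, positive) pair, pick a semi-hard negative:
    a different-class sample n with d(a, p) < d(a, n) < d(a, p) + margin.
    Generic sketch only; not the paper's exact sampling strategy."""
    dist = torch.cdist(emb, emb)                # (B, B) pairwise L2 distances
    same = labels[:, None] == labels[None, :]   # same-class mask
    triplets = []
    for a in range(emb.size(0)):
        for p in same[a].nonzero(as_tuple=True)[0].tolist():
            if p == a:
                continue
            d_ap = dist[a, p]
            # semi-hard band: farther than the positive, but within the margin
            cand = (~same[a]) & (dist[a] > d_ap) & (dist[a] < d_ap + margin)
            idx = cand.nonzero(as_tuple=True)[0]
            if len(idx):
                n = idx[torch.randint(len(idx), (1,))].item()
                triplets.append((a, p, n))
    return triplets
```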
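The PGD hyperparameters in the Experiment Setup row (α = 1/255, 16 iterations, ℓ∞ bound ε = 8/255, margin β = 0.04) are enough to sketch the attack itself. The sketch below is a generic ℓ∞ PGD ascent on a triplet loss that perturbs only the anchor, assuming image tensors in [0, 1]; the paper's Algorithm 1 ("Generating Adversarial Triplets in CA-TRIDE") decouples perturbations across the triplet and is more involved. The function name `pgd_triplet` is hypothetical.

```python
import torch
import torch.nn.functional as F

def pgd_triplet(model, anchor, pos, neg,
                eps=8/255, alpha=1/255, steps=16, beta=0.04):
    """l_inf PGD (Madry et al., 2017) maximizing a triplet margin loss.
    Hyperparameters follow the paper's setup; the perturbation scheme is a
    generic baseline, not CA-TRIDE's decoupled Algorithm 1."""
    x_adv = anchor.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.triplet_margin_loss(model(x_adv), model(pos), model(neg),
                                     margin=beta)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                  # ascent step
            x_adv = anchor + (x_adv - anchor).clamp(-eps, eps)   # l_inf ball
            x_adv = x_adv.clamp(0, 1)                            # valid pixels
        x_adv = x_adv.detach()
    return x_adv
```

In adversarial training for retrieval, such perturbed triplets replace (or augment) clean ones in each mini-batch, so the clipping to the ε-ball keeps the attack within the threat model stated in the paper.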