Retrieval Across Any Domains via Large-scale Pre-trained Model

Authors: Jiexi Yan, Zhihui Yin, Chenghao Xu, Cheng Deng, Heng Huang

ICML 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experimental results on several benchmark datasets demonstrate the superiority of our method. [...] Extensive experiments on several cross-domain datasets are conducted to analyze our TKI." |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Xidian University, Xi'an, Shaanxi, China; (2) School of Electronic Engineering, Xidian University, Xi'an, Shaanxi, China; (3) Department of Computer Science, University of Maryland, College Park, USA |
| Pseudocode | No | The paper describes the proposed method in detail and provides figures, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is publicly available. |
| Open Datasets | Yes | "We conduct experiments on three cross-domain benchmarks, i.e., DomainNet (Peng et al., 2019), PACS (Li et al., 2017), and Office-Home (Venkateswara et al., 2017)." |
| Dataset Splits | No | The paper states: "Within the challenging data-free cross-domain retrieval task, we do not exploit any actual data for training. The actual images in datasets are only used for test." Conventional training/validation splits of image data are therefore neither used nor specified for the main model, as the approach is data-free. |
| Hardware Specification | Yes | "Our approach is implemented in PyTorch and trained with an NVIDIA A6000 GPU." |
| Software Dependencies | No | The paper mentions PyTorch but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | "We train our model by Adam (Kingma & Ba, 2015) optimizer with the same hyperparameters (learning rate, τ, and λ are set as 0.005, 0.1, and 1, respectively) in all experiments." |
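For illustration only, the quoted optimizer configuration can be sketched in plain Python. This is a minimal hand-rolled Adam update using the paper's stated learning rate (0.005) on a toy quadratic objective; it is not the paper's TKI method, and the τ (temperature) and λ (loss weight) constants are shown only as named values, since the actual loss they enter is not reproduced here.

```python
import math

# Hyperparameters reported in the paper's experiment setup.
LR = 0.005        # Adam learning rate
TAU = 0.1         # temperature (would enter the paper's loss, unused here)
LAM = 1.0         # loss weight lambda
BETA1, BETA2, EPS = 0.9, 0.999, 1e-8  # standard Adam defaults (assumed)

def adam_minimize(grad_fn, w, steps=2000):
    """Run Adam updates on a single scalar parameter w."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = BETA1 * m + (1 - BETA1) * g       # first-moment estimate
        v = BETA2 * v + (1 - BETA2) * g * g   # second-moment estimate
        m_hat = m / (1 - BETA1 ** t)          # bias correction
        v_hat = v / (1 - BETA2 ** t)
        w -= LR * m_hat / (math.sqrt(v_hat) + EPS)
    return w

# Toy stand-in objective: f(w) = lam * (w - 3)^2, gradient 2 * lam * (w - 3).
w_star = adam_minimize(lambda w: 2 * LAM * (w - 3.0), w=0.0)
```

In the paper's actual pipeline these constants would configure `torch.optim.Adam` and the retrieval loss; the sketch only demonstrates the reported update rule and step size.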