DiDA: Disambiguated Domain Alignment for Cross-Domain Retrieval with Partial Labels

Authors: Haoran Liu, Ying Ma, Ming Yan, Yingke Chen, Dezhong Peng, Xu Wang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the effectiveness of DiDA through comprehensive experiments on three benchmarks, comparing it to existing state-of-the-art methods.
Researcher Affiliation | Academia | 1 College of Computer Science, Sichuan University, Chengdu, China; 2 National Innovation Center for UHD Video Technology, Chengdu, China; 3 Faculty of Computing, Harbin Institute of Technology, Harbin, China; 4 Centre for Frontier AI Research (CFAR), A*STAR, Singapore; 5 Department of Computer and Information Sciences, Northumbria University, UK
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code available: https://github.com/lhrrrrrr/DiDA.
Open Datasets | Yes | We conduct extensive comparison experiments on three cross-domain benchmark datasets, i.e., Office31 (Saenko et al. 2010), Office-Home (Venkateswara et al. 2017) and ImageCLEF (Long et al. 2017).
Dataset Splits | No | The above datasets are randomly partitioned into training sets and testing sets in an 80-20 ratio.
Hardware Specification | Yes | All the experiments are carried out using PyTorch with two Nvidia GeForce RTX 3090 GPUs.
Software Dependencies | No | All the experiments are carried out using PyTorch with two Nvidia GeForce RTX 3090 GPUs. (PyTorch is mentioned, but without a specific version number.)
Experiment Setup | Yes | In DiDA, we utilize the ResNet-50 network as the encoder and initialize it with parameters pre-trained on ImageNet. Note that the last fully connected layer is substituted by a 512-D randomly initialized linear layer and the output features are l2-normalized. Meanwhile, the classifier consists of a linear layer and is initialized by the Xavier initialization method (Glorot and Bengio 2010). Furthermore, we adopt the Stochastic Gradient Descent (SGD) optimizer with a momentum of 0.9 and set the learning rate to 0.003 and 0.0001 for the encoder and classifier, respectively. For a fair comparison, the batch size is set to 16 and the total epochs are 50.
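
The quoted setup maps onto standard PyTorch components. The following is a minimal sketch, not the authors' released code: it assumes current torchvision APIs, and the number of classes, the dataset, and the training loop are illustrative placeholders rather than details reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

num_classes = 31  # hypothetical placeholder (e.g., Office31)

# Encoder: ResNet-50 pre-trained on ImageNet, with the last fully connected
# layer replaced by a randomly initialized 512-D linear layer.
encoder = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
encoder.fc = nn.Linear(encoder.fc.in_features, 512)

# Classifier: a single linear layer with Xavier initialization.
classifier = nn.Linear(512, num_classes)
nn.init.xavier_uniform_(classifier.weight)
nn.init.zeros_(classifier.bias)

# SGD with momentum 0.9; learning rates 0.003 (encoder) and 0.0001 (classifier).
optimizer = torch.optim.SGD(
    [
        {"params": encoder.parameters(), "lr": 0.003},
        {"params": classifier.parameters(), "lr": 0.0001},
    ],
    momentum=0.9,
)

def forward(x: torch.Tensor) -> torch.Tensor:
    # Output features are l2-normalized before classification, per the setup above.
    feats = F.normalize(encoder(x), dim=1)
    return classifier(feats)

# Training reportedly uses batch size 16 for 50 total epochs; the 80-20 random
# train/test partition has no fixed seed in the paper (hence "Dataset Splits: No"),
# e.g. torch.utils.data.random_split on each benchmark dataset.
```

Splitting the learning rates into two parameter groups reflects the common practice of fine-tuning a pre-trained backbone with a larger step than a freshly initialized head here, and with a smaller one for the classifier as the paper specifies.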