Unsupervised Cross-Domain Image Retrieval via Prototypical Optimal Transport

Authors: Bin Li, Ye Shi, Qian Yu, Jingya Wang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | ProtoOT surpasses existing state-of-the-art methods by a notable margin across benchmark datasets. Notably, on DomainNet, ProtoOT achieves an average P@200 enhancement of 24.44%, and on Office-Home, it demonstrates a P@15 improvement of 12.12%. We evaluate our proposed method on two datasets: Office-Home and DomainNet.
Researcher Affiliation | Academia | ShanghaiTech University, Beihang University
Pseudocode | No | The paper describes the method in prose and mathematical equations but does not include a formally labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Code is available at https://github.com/HCVLAB/ProtoOT.
Open Datasets | Yes | We evaluate our proposed method on two datasets: Office-Home and DomainNet. The Office-Home (Venkateswara et al. 2017) dataset comprises 4 domains (Art, Clipart, Product, Real) encompassing 65 categories. ... The DomainNet (Peng et al. 2019) dataset consists of 6 domains (Clipart, Infograph, Painting, Quickdraw, Real, and Sketch).
Dataset Splits | No | The paper states 'We employ all available images' for Office-Home and uses 7 categories for DomainNet, but does not provide specific percentages or counts for training, validation, or test splits. Evaluation metrics are mentioned, but the split details needed for reproduction are not.
Hardware Specification | No | The paper states 'We employ the ResNet-50 (He et al. 2016) architecture as the encoder fθ' but does not provide any specific details about the hardware (GPU, CPU, memory, etc.) used for experiments.
Software Dependencies | No | Implementation of our framework is in PyTorch (Paszke et al. 2019). No specific version number is given for PyTorch or for any other software dependency.
Experiment Setup | Yes | Our optimization employs the Adam optimizer with a learning rate of 2.5 × 10⁻⁴ over 200 epochs, with a batch size of 64. ... For the Sinkhorn algorithm (Cuturi 2013), the entropic regularization coefficient ϵ is set to 0.05 and, following (Caron et al. 2020), the number of iterations is 3. The number of prototypes corresponds to the number of classes in the training set: 65 for Office-Home and 7 for DomainNet.
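
For reference, the reported setup can be approximated with the short PyTorch sketch below. This is a minimal illustration assembled from the hyperparameters quoted above, not code from the authors' repository: the sinkhorn function follows the Sinkhorn-Knopp normalization style of Caron et al. 2020, with ϵ = 0.05 and 3 iterations as reported, and the 65-prototype constant corresponds to the Office-Home setting.

```python
import torch
import torchvision

# Reported hyperparameters (Office-Home setting; use 7 prototypes for DomainNet)
N_PROTOTYPES = 65     # number of classes in the training set
BATCH_SIZE = 64
EPOCHS = 200
LR = 2.5e-4           # Adam learning rate
EPS = 0.05            # entropic regularization coefficient
SINKHORN_ITERS = 3    # Sinkhorn iterations, following Caron et al. 2020

# ResNet-50 encoder f_theta and Adam optimizer, as stated in the paper
encoder = torchvision.models.resnet50(weights=None)
optimizer = torch.optim.Adam(encoder.parameters(), lr=LR)

@torch.no_grad()
def sinkhorn(scores: torch.Tensor,
             eps: float = EPS,
             n_iters: int = SINKHORN_ITERS) -> torch.Tensor:
    """Sinkhorn-Knopp normalization in the style of Caron et al. 2020.

    scores: (batch_size, n_prototypes) similarity logits between batch
    features and prototypes. Returns soft assignments whose rows sum to 1.
    """
    Q = torch.exp(scores / eps).T          # (n_prototypes, batch_size)
    Q /= Q.sum()                           # normalize to a joint probability matrix
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True)    # rows: uniform mass per prototype
        Q /= K
        Q /= Q.sum(dim=0, keepdim=True)    # columns: uniform mass per sample
        Q /= B
    return (Q * B).T                       # each sample's assignment sums to 1
```

As a usage sketch, one would score a batch of L2-normalized features against the prototype matrix (e.g., `scores = features @ prototypes.T`) and pass the result to `sinkhorn` to obtain balanced pseudo-assignments; the exact loss and prototype-update rules are specific to ProtoOT and are described in the paper itself.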