CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation

Authors: Ankit Singh

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have empirically shown that both of these modules complement each other to achieve superior performance. Experiments on three well-known domain adaptation benchmark datasets, namely DomainNet, Office-Home, and Office31, demonstrate the effectiveness of our approach. ... We perform extensive ablation experiments highlighting the role of different components of our framework.
Researcher Affiliation | Academia | Ankit Singh, Department of Computer Science, Indian Institute of Technology, Madras; singh.ankit@cse.iitm.ac.in
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The link provided (https://github.com/VisionLearningGroup/SSDA_MME) points to the data splits and settings of a prior work (MME [46]), not to open-source code for the CLDA method described in this paper.
Open Datasets | Yes | We evaluate the effectiveness of our approach on three different domain adaptation datasets: DomainNet [43], Office-Home [56] and Office31 [45].
Dataset Splits | Yes | For the fair comparison, we use the data-splits (train, validation, and test splits) released by [46] on GitHub. (A split-loading sketch follows the table.)
Hardware Specification | No | The paper does not provide hardware details such as GPU/CPU models, processor types, or memory amounts used to run the experiments. It only mentions the ResNet34 and AlexNet backbone networks and the use of PyTorch.
Software Dependencies | No | All our experiments are performed using Pytorch [40]. (No version is specified.)
Experiment Setup | Yes | We use an identical set of hyperparameters (α = 4, β = 1) across all our experiments other than minibatch size. We have set τ = 5 in our experiments. Resnet34 experiments are performed with minibatch size B = 32 and Alexnet models are trained with B = 24. We use µ = 4 for all our experiments. We use SGD optimizer with a momentum of 0.9 and an initial learning rate of 0.01 with cosine learning rate decay for all our experiments. Weight decay is set to 0.0005 for all our models. (An optimizer sketch follows the table.)
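
The splits referenced in the Dataset Splits row come from the MME repository linked in the Open Source Code row. As a hedged illustration only, the sketch below assumes each split file is a plain-text image list with one `relative_image_path label` pair per line; this file layout and the file names in the usage comment are assumptions, not verified against the released repository.

```python
from pathlib import Path

def read_split(split_file: str, image_root: str):
    """Return (image_path, class_index) pairs from a plain-text split file.

    Assumes each non-empty line is "relative_image_path label" (hypothetical layout).
    """
    samples = []
    for line in Path(split_file).read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        rel_path, label = line.rsplit(maxsplit=1)
        samples.append((str(Path(image_root) / rel_path), int(label)))
    return samples

# Hypothetical usage; file and directory names are placeholders:
# train_samples = read_split("labeled_source_images_real.txt", "data/multi")
```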
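
The optimizer settings in the Experiment Setup row map directly onto standard PyTorch calls. The following is a minimal sketch, not the authors' code: the backbone head, pretraining, epoch count, and loop body are placeholders not stated in the quoted setup, and the CLDA-specific values (α = 4, β = 1, τ = 5, µ = 4) belong to the losses and batch construction rather than the optimizer, so they appear only as comments.

```python
import torch
from torchvision.models import resnet34

model = resnet34(weights=None)  # ResNet34 backbone as reported; head/pretraining unspecified
num_epochs = 50                 # placeholder; not stated in the quoted setup

# Reported optimizer: SGD, momentum 0.9, initial learning rate 0.01, weight decay 0.0005.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,
    momentum=0.9,
    weight_decay=0.0005,
)

# Reported schedule: cosine learning rate decay.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)

# CLDA hyperparameters from the quoted setup (alpha = 4, beta = 1, tau = 5, mu = 4,
# minibatch size B = 32 for ResNet34) enter the loss and batching code, omitted here.
for epoch in range(num_epochs):
    # ... compute losses, loss.backward(), optimizer.step(), optimizer.zero_grad() ...
    scheduler.step()
```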