Unsupervised Domain Adaptation with a Relaxed Covariate Shift Assumption

Authors: Tameem Adel, Han Zhao, Alexander Wong

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the model on the Amazon reviews and the CVC pedestrian detection datasets.
Researcher Affiliation | Academia | Tameem Adel, University of Manchester, UK (tameem.hesham@gmail.com); Han Zhao, Carnegie Mellon University, USA (han.zhao@cs.cmu.edu); Alexander Wong, University of Waterloo, Canada (a28wong@uwaterloo.ca)
Pseudocode | Yes | Algorithm 1: Generative Domain Adaptation Algorithm (GenDA)
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it link to a code repository.
Open Datasets | Yes | The experiments use the Amazon reviews dataset (Blitzer, Dredze, and Pereira 2007), one of the benchmark domain adaptation datasets. As labeled source data, "we use the CVC-04 virtual-world pedestrian dataset (Vazquez et al. 2014), which consists of 1,208 virtual pedestrians and 1,220 (a subset of the available 6,828) pedestrian-free cropped images." As unlabeled target data, whose labels are used only to assess accuracy at the end but never in learning, the paper uses the CVC-02 real-world dataset (Geronimo et al. 2010).
Dataset Splits | No | The paper specifies the number of labeled source instances (2,000) and unlabeled target instances (2,000) used in training, but it provides no validation-split details such as percentages, sample counts, or a cross-validation setup.
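The reported training draw (2,000 labeled source instances, 2,000 unlabeled target instances, no validation split) can be sketched as follows. The pool sizes, feature dimensionality, and synthetic data are hypothetical stand-ins; the paper only fixes the two sample counts.

```python
import numpy as np

# Hypothetical stand-in pools for the source and target domains; the paper
# reports 2,000 labeled source and 2,000 unlabeled target instances but
# gives no validation split, so only the training draw is sketched here.
rng = np.random.default_rng(0)
X_src_pool = rng.normal(size=(5000, 100))   # source features
y_src_pool = rng.integers(0, 2, size=5000)  # source labels
X_tgt_pool = rng.normal(size=(5000, 100))   # target features (labels withheld)

def sample_da_training_sets(X_src, y_src, X_tgt, n_src=2000, n_tgt=2000, seed=0):
    """Draw a labeled source set and an unlabeled target set without replacement."""
    r = np.random.default_rng(seed)
    src_idx = r.choice(len(X_src), size=n_src, replace=False)
    tgt_idx = r.choice(len(X_tgt), size=n_tgt, replace=False)
    return X_src[src_idx], y_src[src_idx], X_tgt[tgt_idx]

Xs, ys, Xt = sample_da_training_sets(X_src_pool, y_src_pool, X_tgt_pool)
print(Xs.shape, ys.shape, Xt.shape)  # (2000, 100) (2000,) (2000, 100)
```

Note that the target labels are deliberately left out of the returned tuple, matching the paper's statement that they are used only for final evaluation.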
Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper mentions implementing with "SVMlight (Joachims 1999a; 1999b)" but does not provide version numbers for it or for any other software dependencies, libraries, or programming languages used in the experiments.
Experiment Setup | No | The paper describes multilayer perceptrons (MLPs) with two hidden layers and the softplus activation function, with gradients computed using stochastic gradient descent (SGD) and AdaGrad, and training performed in minibatches. However, specific hyperparameters such as learning rates, batch sizes, and numbers of epochs are not provided.
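The reported setup (a two-hidden-layer MLP with softplus activations, trained in minibatches with SGD and AdaGrad) can be sketched in plain NumPy. This is a minimal stand-in, not the authors' implementation: the layer widths, learning rate, batch size, epoch count, and toy data below are all hypothetical, since the paper does not report them.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # Numerically stable log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def sigmoid(x):
    # Derivative of softplus, used in backprop.
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: the paper states two hidden layers but not their widths.
d_in, h, n_cls = 20, 32, 2
params = {
    "W1": rng.normal(0, 0.1, (d_in, h)),  "b1": np.zeros(h),
    "W2": rng.normal(0, 0.1, (h, h)),     "b2": np.zeros(h),
    "W3": rng.normal(0, 0.1, (h, n_cls)), "b3": np.zeros(n_cls),
}
accum = {k: np.zeros_like(v) for k, v in params.items()}  # AdaGrad accumulators

def forward(X, p):
    z1 = X @ p["W1"] + p["b1"]; a1 = softplus(z1)
    z2 = a1 @ p["W2"] + p["b2"]; a2 = softplus(z2)
    return z1, a1, z2, a2, a2 @ p["W3"] + p["b3"]  # caches + logits

def loss_and_grads(X, y, p):
    # Softmax cross-entropy loss and full backprop through both softplus layers.
    z1, a1, z2, a2, logits = forward(X, p)
    logits = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    n = len(y)
    loss = -np.log(probs[np.arange(n), y] + 1e-12).mean()
    d = probs.copy(); d[np.arange(n), y] -= 1.0; d /= n   # dL/dlogits
    g = {"W3": a2.T @ d, "b3": d.sum(0)}
    d = (d @ p["W3"].T) * sigmoid(z2)
    g.update({"W2": a1.T @ d, "b2": d.sum(0)})
    d = (d @ p["W2"].T) * sigmoid(z1)
    g.update({"W1": X.T @ d, "b1": d.sum(0)})
    return loss, g

def adagrad_step(p, g, lr=0.1, eps=1e-8):
    # AdaGrad: scale each step by accumulated squared gradients.
    for k in p:
        accum[k] += g[k] ** 2
        p[k] -= lr * g[k] / (np.sqrt(accum[k]) + eps)

# Toy labeled data and a minibatch loop (batch size 50 is a guess).
X = rng.normal(size=(400, d_in)); y = (X[:, 0] > 0).astype(int)
first_loss, _ = loss_and_grads(X, y, params)
for epoch in range(20):
    order = rng.permutation(len(X))
    for i in range(0, len(X), 50):
        b = order[i:i + 50]
        _, g = loss_and_grads(X[b], y[b], params)
        adagrad_step(params, g)
final_loss, _ = loss_and_grads(X, y, params)
```

The sketch illustrates why the reproducibility result is "No": even with the architecture and optimizer named, every numeric choice above had to be invented.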