Adversarial Support Alignment

Authors: Shangyuan Tong, Timur Garipov, Yang Zhang, Shiyu Chang, Tommi S. Jaakkola

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We quantitatively evaluate the method across domain adaptation tasks with shifts in label distributions. Our experiments show that the proposed method is more robust against these shifts than other alignment-based baselines."
Researcher Affiliation | Collaboration | Shangyuan Tong (MIT CSAIL), Timur Garipov (MIT CSAIL), Yang Zhang (MIT-IBM Watson AI Lab), Shiyu Chang (UC Santa Barbara), Tommi Jaakkola (MIT CSAIL)
Pseudocode | Yes | "Algorithm 1: Our proposed ASA algorithm."
Open Source Code | Yes | "We provide the code reproducing experiment results at https://github.com/timgaripov/asa."
Open Datasets | Yes | "We use USPS (Hull, 1994) and MNIST (LeCun et al., 1998) datasets for this adaptation problem. We use STL (Coates et al., 2011) and CIFAR-10 (Krizhevsky, 2009) for this adaptation task. We use train and validation sets of the VisDA-17 challenge (Peng et al., 2017)." (see the dataset-loading sketch after the table)
Dataset Splits | Yes | "We use train and validation sets of the VisDA-17 challenge (Peng et al., 2017)."
Hardware Specification | Yes | "The computational experiments presented in this paper were performed on Satori cluster developed as a collaboration between MIT and IBM."
Software Dependencies | No | The paper mentions 'Weights & Biases (Biewald, 2020)' but does not specify a version number for this or any other software dependency.
Experiment Setup | Yes | "We train all methods for 65,000 steps with batch size 64. We train the feature extractor, the classifier, and the discriminator with SGD (learning rate 0.02, momentum 0.9, weight decay 5×10⁻⁴)." (see the training-setup sketch after the table)
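The Open Datasets row names USPS, MNIST, STL, CIFAR-10, and VisDA-17. Below is a minimal sketch of pulling these benchmarks in with torchvision; the root paths, the bare ToTensor transform, and the ImageFolder layout assumed for VisDA-17 are illustrative assumptions, not the authors' preprocessing pipeline (that lives in the linked repository).

```python
# Sketch only: loading the benchmark datasets named in the paper via torchvision.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# USPS <-> MNIST digit adaptation pair
usps = datasets.USPS("data/usps", train=True, transform=to_tensor, download=True)
mnist = datasets.MNIST("data/mnist", train=True, transform=to_tensor, download=True)

# STL <-> CIFAR-10 object adaptation pair
stl = datasets.STL10("data/stl10", split="train", transform=to_tensor, download=True)
cifar = datasets.CIFAR10("data/cifar10", train=True, transform=to_tensor, download=True)

# VisDA-17: assumed to be pre-downloaded as class-labeled image folders
visda_train = datasets.ImageFolder("data/visda17/train", transform=to_tensor)
visda_val = datasets.ImageFolder("data/visda17/validation", transform=to_tensor)

# Batch size 64, as reported in the experiment setup
source_loader = DataLoader(usps, batch_size=64, shuffle=True)
target_loader = DataLoader(mnist, batch_size=64, shuffle=True)
```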
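The Experiment Setup row reports 65,000 training steps, batch size 64, and SGD with learning rate 0.02, momentum 0.9, and weight decay 5×10⁻⁴ for the feature extractor, classifier, and discriminator. The skeleton below wires those reported hyperparameters into a generic adversarial feature-alignment step; the module architectures, the loss terms, and the loader iterators are illustrative placeholders, not the ASA support-alignment objective of the paper's Algorithm 1.

```python
# Sketch only: reported optimizer settings around placeholder modules and a
# generic adversarial alignment loss (the ASA objective itself is defined in
# Algorithm 1 of the paper and implemented in the released repository).
import itertools
import torch
from torch import nn

# Placeholder architectures, sized for flattened 28x28 inputs for illustration.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
classifier = nn.Linear(256, 10)
discriminator = nn.Linear(256, 1)

def make_sgd(params):
    # Reported settings: learning rate 0.02, momentum 0.9, weight decay 5e-4.
    return torch.optim.SGD(params, lr=0.02, momentum=0.9, weight_decay=5e-4)

opt_fc = make_sgd(itertools.chain(feature_extractor.parameters(),
                                  classifier.parameters()))
opt_d = make_sgd(discriminator.parameters())

bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def training_step(x_src, y_src, x_tgt, align_weight=1.0):
    """One alternating update; a standard adversarial domain loss stands in
    for the paper's support-alignment term."""
    z_src, z_tgt = feature_extractor(x_src), feature_extractor(x_tgt)

    # 1) Discriminator: separate source features (label 1) from target (label 0).
    opt_d.zero_grad()
    d_loss = (bce(discriminator(z_src.detach()), torch.ones(len(x_src), 1)) +
              bce(discriminator(z_tgt.detach()), torch.zeros(len(x_tgt), 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Feature extractor + classifier: classify source data and fool the
    #    discriminator on target features.
    opt_fc.zero_grad()
    cls_loss = ce(classifier(z_src), y_src)
    align_loss = bce(discriminator(z_tgt), torch.ones(len(x_tgt), 1))
    (cls_loss + align_weight * align_loss).backward()
    opt_fc.step()

# Reported schedule: 65,000 steps with batch size 64; src_batches / tgt_batches
# are assumed to be infinite iterators over source and target loaders.
# for step in range(65_000):
#     x_src, y_src = next(src_batches)
#     x_tgt, _ = next(tgt_batches)
#     training_step(x_src, y_src, x_tgt)
```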