Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
Authors: Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work, we prove a simple lower bound on the target domain error that complements the existing upper bound... We evaluate the effect of these attacks on popular UDA methods using benchmark datasets where they have been previously shown to be successful. Our results show that poisoning can significantly decrease the target domain accuracy... |
| Researcher Affiliation | Collaboration | 1Tulane University 2Lawrence Livermore National Laboratory 3IBM Research |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Our code can be found at https://github.com/akshaymehra24/LimitsOfUDA. |
| Open Datasets | Yes | Two benchmark datasets are used in our experiments, namely Digits and Office-31... We evaluate four tasks using the SVHN, MNIST, MNIST-M, and USPS datasets under Digits and six tasks under Office-31 using the Amazon (A), DSLR (D), and Webcam (W) datasets. The datasets are publicly available. |
| Dataset Splits | No | The paper states 'We use the same train/test splits as used in [9] [18] [27]' but does not explicitly mention a validation split. |
| Hardware Specification | Yes | All experiments are performed on a single NVIDIA V100 GPU. |
| Software Dependencies | Yes | We used PyTorch 1.4. |
| Experiment Setup | Yes | Batch size is set to 128 for all datasets. The Adam optimizer is used with a learning rate of 1e-4 for Digits and 1e-5 for Office-31. We train for 50 epochs. (A minimal sketch of this configuration follows the table.) |
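
The reported setup (batch size 128, Adam with lr 1e-4 for Digits, 50 epochs) maps onto a standard PyTorch training loop. The sketch below is a hedged illustration only: the `DigitsClassifier` architecture, the choice of SVHN as the source domain, and the plain supervised objective are assumptions made for self-containment; the actual UDA methods and poisoning attacks are in the authors' repository linked above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical source-domain classifier; the paper's architectures may differ.
class DigitsClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 5 * 5, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # SVHN is the source domain of the SVHN -> MNIST Digits task; torchvision
    # provides its standard train split (32x32 RGB images).
    transform = transforms.ToTensor()
    train_set = datasets.SVHN("data", split="train", download=True, transform=transform)
    loader = DataLoader(train_set, batch_size=128, shuffle=True)  # batch size from the paper

    model = DigitsClassifier().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr reported for Digits
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(50):  # 50 epochs, as stated in the paper
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

if __name__ == "__main__":
    main()
```

Swapping the Digits learning rate of 1e-4 for 1e-5 and changing the dataset/backbone would reproduce the Office-31 configuration described in the same row.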