A Closer Look at Smoothness in Domain Adversarial Training
Authors: Harsh Rangwani, Sumukh K Aithal, Mayank Mishra, Arihant Jain, Venkatesh Babu Radhakrishnan
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively verify the empirical efficacy of SDAT over DAT across various datasets for classification (i.e., DomainNet, VisDA-2017 and Office-Home) with ResNet and Vision Transformer (Dosovitskiy et al., 2020) (ViT) backbones. We also show a prototypical application of SDAT in DA for object detection, demonstrating its diverse applicability. |
| Researcher Affiliation | Collaboration | ¹Video Analytics Lab, Indian Institute of Science, Bengaluru, India; ²PES University, Bengaluru; ³Amazon, India (work done at Indian Institute of Science, Bengaluru). |
| Pseudocode | Yes | Appendix L: PyTorch Pseudocode for SDAT (a hedged sketch of the update step follows the table). |
| Open Source Code | Yes | The source code used for experiments is available at: https://github.com/val-iisc/SDAT. |
| Open Datasets | Yes | We evaluate our proposed method on three datasets: Office-Home, VisDA-2017, and DomainNet... Office-Home (Venkateswara et al., 2017):... DomainNet (Peng et al., 2019):... VisDA-2017 (Peng et al., 2017): |
| Dataset Splits | Yes | We split the target data into train and validation sets and report the best mAP on validation data. |
| Hardware Specification | Yes | All the above experiments were run on Nvidia V100, RTX 2080 and RTX A5000 GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2019)', 'Detectron2 (Wu et al., 2019)', and 'Wandb (Biewald, 2020)' with citations indicating their year of publication/release, but it does not provide explicit version numbers (e.g., PyTorch 1.9) for these software components. |
| Experiment Setup | Yes | We use a learning rate of 0.01 with batch size 32 in all of our experiments with ResNet backbone... The ρ value is set to 0.02 for the Office-Home experiments, 0.005 for the VisDA-2017 experiments and 0.05 for the DomainNet experiments. ...We train it for a total of 30 epochs with 1000 iterations per epoch. The momentum parameter in SGD is set to 0.9 and a weight decay of 0.001 is used. (A configuration sketch follows the table.) |
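The Pseudocode row points to the paper's Appendix L (PyTorch pseudocode for SDAT). The snippet below is a minimal sketch of one SDAT-style training step, not the authors' released code: it assumes a DANN-style setup with a feature extractor `g`, classifier head `f`, and domain discriminator `d` fed through a gradient reversal layer, and it applies a SAM-style sharpness-aware perturbation only to the task (source classification) loss, as the paper describes, while the adversarial loss is optimized normally. All function and variable names are illustrative.

```python
# Minimal SDAT-style training step (illustrative sketch, not the authors' code).
# Assumed components: feature extractor `g`, classifier head `f`, and a domain
# discriminator `d` trained through a gradient reversal layer (DANN-style).
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used in domain adversarial training."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


def sdat_step(g, f, d, optimizer, x_s, y_s, x_t, rho=0.02, trade_off=1.0):
    # 1) First pass: gradient of the task (source classification) loss only.
    optimizer.zero_grad()
    F.cross_entropy(f(g(x_s)), y_s).backward()

    # 2) SAM ascent step: perturb g and f by rho * grad / ||grad||.
    params = [p for p in list(g.parameters()) + list(f.parameters()) if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm(2) for p in params]), 2)
    perturbations = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append(e)

    # 3) Second pass at the perturbed point: smoothed task loss plus the usual
    #    adversarial (domain discrimination) loss via gradient reversal.
    optimizer.zero_grad()
    feat_s, feat_t = g(x_s), g(x_t)
    task_loss = F.cross_entropy(f(feat_s), y_s)
    logit_s = d(GradReverse.apply(feat_s))
    logit_t = d(GradReverse.apply(feat_t))
    adv_loss = F.binary_cross_entropy_with_logits(logit_s, torch.ones_like(logit_s)) + \
               F.binary_cross_entropy_with_logits(logit_t, torch.zeros_like(logit_t))
    (task_loss + trade_off * adv_loss).backward()

    # 4) Undo the perturbation, then update at the original weights using the
    #    gradients computed at the perturbed point (the SAM descent step).
    with torch.no_grad():
        for p, e in zip(params, perturbations):
            p.sub_(e)
    optimizer.step()
    return task_loss.item(), adv_loss.item()
```

The manual perturb/restore above is equivalent to the `first_step`/`second_step` pattern used by common SAM optimizer wrappers; it is written out explicitly here only to make the ascent and descent steps visible.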
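For the Experiment Setup row, the quoted hyperparameters can be collected into a small configuration sketch. The numeric values (learning rate, batch size, ρ per dataset, momentum, weight decay, epochs, iterations per epoch) are those reported for the ResNet backbone; the variable names and the placeholder model are illustrative only.

```python
# Configuration sketch for the reported ResNet-backbone runs.
# Numeric values come from the Experiment Setup row; names are illustrative.
import torch

RHO = {                   # SAM neighbourhood size per benchmark
    "office_home": 0.02,
    "visda_2017": 0.005,
    "domainnet": 0.05,
}

BATCH_SIZE = 32
EPOCHS = 30
ITERS_PER_EPOCH = 1000

model = torch.nn.Linear(512, 65)   # placeholder for backbone + classifier head

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,
    momentum=0.9,
    weight_decay=1e-3,
)
```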