Balancing Discriminability and Transferability for Source-Free Domain Adaptation
Authors: Jogendra Nath Kundu, Akshay R Kulkarni, Suvaansh Bhambri, Deepesh Mehta, Shreyas Anand Kulkarni, Varun Jampani, Venkatesh Babu Radhakrishnan
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We thoroughly assess our technique against numerous state-of-the-art methods in different DA scenarios. Datasets. We use four object classification DA benchmarks. Office-31 (Saenko et al., 2010) has three domains, 31 classes each: Amazon (A), DSLR (D), and Webcam (W). Office-Home (Venkateswara et al., 2017) contains four domains, 65 classes each: Artistic (Ar), Clipart (Cl), Product (Pr), and Real-world (Rw). |
| Researcher Affiliation | Collaboration | 1Indian Institute of Science 2Google Research. |
| Pseudocode | Yes | Algorithm 1: (·) = Mixup(·, type); Algorithm 2: Integrating into typical SFDA training |
| Open Source Code | No | Towards reproducible research, we will publicly release our complete codebase and trained network weights. |
| Open Datasets | Yes | We use four object classification DA benchmarks. Office-31 (Saenko et al., 2010)... Office-Home (Venkateswara et al., 2017)... VisDA (Peng et al., 2018)... DomainNet (Peng et al., 2019)... For semantic segmentation DA, we use synthetic GTA5 (G) (Richter et al., 2016), SYNTHIA (Y) (Ros et al., 2016), Synscapes (S) (Wrenninge et al., 2018) as source datasets and real-world Cityscapes (Cordts et al., 2016) as the target data. |
| Dataset Splits | No | The paper mentions training, validation, and test sets, and uses standard benchmark datasets, but it does not explicitly provide the specific percentages, sample counts, or detailed methodology for how these splits were performed for reproducibility. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., CPU, GPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions several algorithms and tools by citing their respective papers (e.g., 'Adam optimizer (Kingma & Ba, 2014)', 'Jung et al. (2020)'s frost (weather condition) augmentation'), but it does not provide specific version numbers for software libraries or dependencies (e.g., Python, PyTorch, TensorFlow versions) required for reproduction. |
| Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 1e-3, momentum of 0.9, and batch size of 64 for training with label smoothing following (Yang et al., 2021a; Liang et al., 2020). We empirically set λ = 0.1 for both edge and feature-mixup during both vendor-side and client-side training. We find that λ = 0.1 works well across all settings and tasks. |
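The reported setup combines a mixup interpolation weight λ = 0.1 with label smoothing during training. A minimal sketch of these two ingredients is below; the function names and the convention of applying λ to the first operand are assumptions for illustration, since the paper's exact mixup formulation is not quoted in the table.

```python
import numpy as np

def mixup(x_a, x_b, lam=0.1):
    # Convex combination of two inputs with weight lam on x_a.
    # lam = 0.1 matches the value the paper reports for both
    # edge- and feature-mixup; which operand receives lam is an
    # assumption here.
    return lam * x_a + (1.0 - lam) * x_b

def smooth_labels(onehot, eps=0.1):
    # Standard label smoothing: shrink the one-hot target toward
    # a uniform distribution over k classes. eps = 0.1 is a common
    # default and an assumption, not a value quoted from the paper.
    k = onehot.shape[-1]
    return onehot * (1.0 - eps) + eps / k

# Example: mix two feature vectors and smooth a 2-class target.
mixed = mixup(np.ones(3), np.zeros(3), lam=0.1)      # [0.1, 0.1, 0.1]
target = smooth_labels(np.array([1.0, 0.0]), eps=0.1)  # [0.95, 0.05]
```

With λ = 0.1 the mixed sample stays close to `x_b`, i.e. the mixup acts as a mild perturbation rather than an even blend, which is consistent with a single value working "across all settings and tasks" as the paper claims.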