On $f$-Divergence Principled Domain Adaptation: An Improved Framework

Authors: Ziqiao Wang, Yongyi Mao

NeurIPS 2024

Reproducibility assessment: each variable below is listed with its result and the supporting LLM response.
Research Type: Experimental
LLM Response: "Empirical results demonstrate the superior performance of f-DD-based learning algorithms over previous works in popular UDA benchmarks."

Researcher Affiliation: Academia
LLM Response: Ziqiao Wang (Tongji University, Shanghai, China; ziqiaowang@tongji.edu.cn) and Yongyi Mao (University of Ottawa, Ottawa, Canada; ymao@uottawa.ca).

Pseudocode: No
LLM Response: The paper includes a diagram illustrating the training framework (Figure 2) but contains no formal pseudocode or algorithm blocks labeled "Algorithm" or "Pseudocode".

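Since no pseudocode is given, the following is a minimal, hypothetical sketch of the kind of variational discrepancy terms the names KL-DD and χ²-DD suggest, written against standard f-divergence lower bounds (Donsker-Varadhan for KL; the convex conjugate f*(y) = y + y²/4 for Pearson χ²). The critic outputs `g_src`/`g_tgt` and all function names are our assumptions, not the authors' implementation.

```python
import math

import torch


def kl_dd_lower_bound(g_src: torch.Tensor, g_tgt: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan bound: KL(P || Q) >= E_P[g] - log E_Q[exp(g)].

    g_src, g_tgt: 1-D critic outputs on source / target features (assumed).
    """
    log_mean_exp_tgt = torch.logsumexp(g_tgt, dim=0) - math.log(g_tgt.numel())
    return g_src.mean() - log_mean_exp_tgt


def chi2_dd_lower_bound(g_src: torch.Tensor, g_tgt: torch.Tensor) -> torch.Tensor:
    """Variational chi^2 bound using the conjugate f*(y) = y + y^2 / 4."""
    return g_src.mean() - (g_tgt + 0.25 * g_tgt.pow(2)).mean()
```

Maximizing either bound over the critic yields a discrepancy estimate that the feature extractor is then trained to reduce, which plausibly matches the adversarial structure Figure 2 depicts.
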
Open Source Code: Yes
LLM Response: "Our code is available at https://github.com/ZiqiaoWangGeothe/f-DD."

Open Datasets: Yes
LLM Response: "We use three benchmark datasets: 1) the Office-31 dataset [46]... 2) the Office-Home dataset [47]... and 3) two Digits datasets, MNIST and USPS [48]..."

Dataset Splits: No
LLM Response: The paper states, "We follow the splits and evaluation protocol established by [49], where MNIST and USPS have 60,000 and 7,291 training images, as well as 10,000 and 2,007 test images, respectively." While it specifies training and test set sizes for Digits, it does not explicitly describe validation splits for any dataset.

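For reference, a minimal sketch of loading the Digits pair with torchvision (an assumed dependency; the paper does not name its data pipeline), whose standard train/test splits match the sizes quoted above:

```python
from torchvision import datasets, transforms

# USPS images are 16x16, so resize to MNIST's 28x28 (a choice we assume,
# not one the paper specifies).
to_tensor = transforms.Compose([transforms.Resize(28), transforms.ToTensor()])

mnist_train = datasets.MNIST("data", train=True, download=True, transform=to_tensor)   # 60,000
mnist_test = datasets.MNIST("data", train=False, download=True, transform=to_tensor)   # 10,000
usps_train = datasets.USPS("data", train=True, download=True, transform=to_tensor)     # 7,291
usps_test = datasets.USPS("data", train=False, download=True, transform=to_tensor)     # 2,007
```
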
Hardware Specification: Yes
LLM Response: "All experiments are conducted on NVIDIA V100 (32GB) GPUs."

Software Dependencies: No
LLM Response: The paper mentions specific models such as a pretrained ResNet-50 and LeNet, and an optimizer (SGD with Nesterov momentum), but it does not pin software versions for programming languages, libraries, or other dependencies (e.g., "Python 3.x", "PyTorch x.x").

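As an illustration of the unpinned dependencies, instantiating the named ResNet-50 backbone in a recent torchvision (>= 0.13, our assumption) looks like the following; older versions use `pretrained=True` instead of the weights enum:

```python
import torch.nn as nn
from torchvision import models

# Pretrained ResNet-50 as named in the paper; the weights enum is a
# torchvision >= 0.13 API and is our assumption, not a pinned version.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()  # expose the 2048-d pooled features to the adaptation head
```
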
Experiment Setup: Yes
LLM Response: "Our f-DD is trained for 40 epochs using SGD with Nesterov Momentum, setting the momentum to 0.9, the learning rate to 0.004, and the batch size to 32. Particularly, on Office-31, we vary the trade-off parameter η for our KL-DD within [3, 4.5, 5.75], and for our χ²-DD within [1, 1.75, 2]."

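The reported hyperparameters translate directly into an optimizer and outer loop. In the sketch below, only the SGD settings, epoch count, batch size, and η sweep come from the paper; the placeholder networks, the dummy paired loader, and the reuse of `kl_dd_lower_bound` from the sketch after the Pseudocode entry are all our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks; only the optimizer settings and the loop structure
# below reflect what the paper reports.
features = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
clf_head = nn.Linear(64, 10)
critic = nn.Linear(64, 1)  # hypothetical critic g for the discrepancy term

# Reported setup: SGD with Nesterov momentum 0.9 and learning rate 0.004.
optimizer = torch.optim.SGD(
    [*features.parameters(), *clf_head.parameters()],
    lr=0.004, momentum=0.9, nesterov=True,
)

eta = 4.5  # trade-off weight; the paper sweeps {3, 4.5, 5.75} for KL-DD on Office-31

# Dummy stand-in for a paired source/target loader with the reported batch size 32.
paired_loader = [
    (torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,)), torch.randn(32, 1, 28, 28))
    for _ in range(4)
]

for epoch in range(40):  # trained for 40 epochs in the reported setup
    for x_src, y_src, x_tgt in paired_loader:
        z_src, z_tgt = features(x_src), features(x_tgt)
        cls_loss = F.cross_entropy(clf_head(z_src), y_src)
        # kl_dd_lower_bound is the KL-DD sketch given after the Pseudocode entry.
        dd_loss = kl_dd_lower_bound(critic(z_src).squeeze(1), critic(z_tgt).squeeze(1))
        loss = cls_loss + eta * dd_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # In practice the critic is trained adversarially to maximize dd_loss
        # (e.g., via a gradient reversal layer); omitted here for brevity.
```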