Pareto Domain Adaptation

Authors: Fangrui Lv, Jian Liang, Kaixiong Gong, Shuang Li, Chi Harold Liu, Han Li, Di Liu, Guoren Wang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on image classification and semantic segmentation benchmarks demonstrate the effectiveness of ParetoDA. Our code is available at https://github.com/BIT-DA/ParetoDA.
Researcher Affiliation | Collaboration | 1 Beijing Institute of Technology, China ({fangruilv,kxgong,shuangli,wanggrbit}@bit.edu.cn, liuchi02@gmail.com); 2 Alibaba Group, China ({xuelang.lj,lihan.lh,wendi.ld}@alibaba-inc.com)
Pseudocode | Yes | Algorithm 1: Update Equations for ParetoDA
Open Source Code | Yes | Our code is available at https://github.com/BIT-DA/ParetoDA.
Open Datasets | Yes | Office-31 [40] is a typical benchmark for cross-domain object classification... Office-Home [48] is a more difficult benchmark... VisDA-2017 [37] is a large-scale synthetic-to-real dataset... Cityscapes [8] is a real-world semantic segmentation dataset... GTA5 [38] includes 24,966 game screenshots...
Dataset Splits | Yes | In practice, we randomly split 10% of the data from the original target set as the validation set and take the remaining 90% as the training set. (A split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details for running the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | All experiments are implemented via PyTorch [36], and we adopt the commonly used stochastic gradient descent (SGD) optimizer with momentum 0.9 and weight decay 1e-4 for all experiments. (No version numbers are provided for PyTorch or other software dependencies.)
Experiment Setup | Yes | All experiments are implemented via PyTorch [36], and we adopt the commonly used stochastic gradient descent (SGD) optimizer with momentum 0.9 and weight decay 1e-4 for all experiments. (An optimizer sketch follows the table.)
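The 90%/10% target-domain split reported under "Dataset Splits" can be reproduced with PyTorch's standard utilities. The sketch below is minimal and assumes a torchvision ImageFolder dataset for the target domain; the dataset path and random seed are illustrative and not taken from the paper.

```python
# Minimal sketch of the 90% train / 10% validation split of the target set.
# The path and seed below are hypothetical, not from the paper.
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

target_set = datasets.ImageFolder(
    "data/office31/webcam",          # hypothetical target-domain data path
    transform=transforms.ToTensor(),
)

n_val = int(0.1 * len(target_set))   # 10% of the target set held out for validation
n_train = len(target_set) - n_val    # remaining 90% used for training

train_set, val_set = random_split(
    target_set,
    [n_train, n_val],
    generator=torch.Generator().manual_seed(0),  # fixed seed for a reproducible split
)
```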
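The optimizer settings reported under "Experiment Setup" (SGD, momentum 0.9, weight decay 1e-4) correspond to a standard PyTorch configuration. In the sketch below, the ResNet-50 backbone and the learning rate are assumptions for illustration only; this excerpt does not specify them.

```python
# Minimal sketch of the reported optimizer settings: SGD with momentum 0.9
# and weight decay 1e-4. Backbone and learning rate are assumed, not reported here.
import torch
from torchvision import models

model = models.resnet50(pretrained=True)  # assumed backbone; common choice in domain adaptation

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,            # assumed learning rate; not given in this excerpt
    momentum=0.9,       # as reported
    weight_decay=1e-4,  # as reported
)
```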