Learning to Adapt via Latent Domains for Adaptive Semantic Segmentation

Authors: Yunan Liu, Shanshan Zhang, Yang Li, Jian Yang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | When evaluated on standard benchmarks, our method is superior to the state-of-the-art methods in both the single target and multiple-target domain adaptation settings.
Researcher Affiliation | Academia | Yunan Liu, Shanshan Zhang, Yang Li, Jian Yang. PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China. {liuyunan, shanshan.zhang, yangli1995, csjyang}@njust.edu.cn
Pseudocode | Yes | Algorithm 1: Meta-Knowledge Learning for Single Target Domain Adaptation (STDA); Algorithm 2: Meta-Knowledge Learning for Multiple Target Domain Adaptation (MTDA). (A hedged sketch of this style of meta-update appears after the table.)
Open Source Code | No | The paper does not provide an explicit statement or link for the public release of its source code.
Open Datasets | Yes | In our experiments, we evaluate the proposed method on both single-target and multiple-target settings of unsupervised domain adaptation. Specifically, we take the GTA5 dataset [1] as the labeled source domain, while the Cityscapes [35] and C-Driving [21] datasets are adopted as unlabeled single-target domain and multi-target domains, respectively.
Dataset Splits | No | The paper mentions sampling data for training and a "query set" within its meta-learning framework, but it does not specify a distinct validation split, with percentages or sample counts, for hyperparameter tuning or model selection.
Hardware Specification | Yes | We implement our proposed methods using PyTorch v1.2.0 on a single NVIDIA P40 GPU (24 GB memory).
Software Dependencies | Yes | We implement our proposed methods using PyTorch v1.2.0 on a single NVIDIA P40 GPU (24 GB memory).
Experiment Setup | Yes | The hyper-parameter λ in Eq. 1 is set to 0.01. The values of J and N (Algorithms 1 and 2) are set to 2 in our experiments. We use the SGD optimizer with 0.9 momentum and 5 × 10⁻⁵ weight decay. The learning rates α, β, γ are empirically set to 1 × 10⁻⁴, 5 × 10⁻⁵, and 1 × 10⁻⁴. (See the configuration sketch after the table.)
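
The paper's Algorithms 1 and 2 are only named above, not reproduced here, so the following is a minimal first-order MAML-style sketch of what one meta-knowledge update step could look like in PyTorch. The `model`, `seg_loss`, the support/query batches, and the inner/outer structure are assumptions for illustration, not the authors' implementation; only the learning rates α = 1 × 10⁻⁴ and β = 5 × 10⁻⁵ come from the quoted experiment setup.

```python
import copy
import torch

def meta_step(model, seg_loss, support_batch, query_batch,
              alpha=1e-4, beta=5e-5):
    """One hypothetical first-order meta-update; placeholders throughout,
    not the paper's Algorithm 1/2."""
    # Inner update: adapt a temporary copy of the model on the support set.
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=alpha)
    x_s, y_s = support_batch
    inner_loss = seg_loss(adapted(x_s), y_s)
    inner_opt.zero_grad()
    inner_loss.backward()
    inner_opt.step()

    # Outer update: evaluate the adapted copy on the query set and apply
    # its gradients to the original model (first-order approximation,
    # which avoids second-order gradients through the inner step).
    x_q, y_q = query_batch
    outer_loss = seg_loss(adapted(x_q), y_q)
    grads = torch.autograd.grad(outer_loss, tuple(adapted.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= beta * g
    return outer_loss.item()
```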
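
The quoted experiment setup translates directly into a PyTorch optimizer configuration. A minimal sketch, assuming a placeholder segmentation network `model` (the architecture is not specified in the excerpt); the constant names are hypothetical, while the values are the ones quoted above:

```python
import torch

# Hyperparameters quoted from the paper's experiment setup.
LAMBDA = 0.01                          # weight λ of the loss term in Eq. 1
ALPHA, BETA, GAMMA = 1e-4, 5e-5, 1e-4  # learning rates α, β, γ
J = N = 2                              # loop counts in Algorithms 1 and 2

# Placeholder for the (unspecified) segmentation network;
# 19 output channels mirrors the Cityscapes class count.
model = torch.nn.Conv2d(3, 19, kernel_size=1)

# SGD with 0.9 momentum and 5e-5 weight decay, as stated in the paper.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=ALPHA,
    momentum=0.9,
    weight_decay=5e-5,
)
```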