Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
Authors: Guoliang Kang, Yunchao Wei, Yi Yang, Yueting Zhuang, Alexander Hauptmann
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment results on two representative domain adaptation benchmarks, i.e. GTAV → Cityscapes and SYNTHIA → Cityscapes, verify the effectiveness of our proposed method and demonstrate that our method performs favorably against previous state-of-the-arts. |
| Researcher Affiliation | Academia | Guoliang Kang1, Yunchao Wei2, Yi Yang2, Yueting Zhuang3, Alexander G. Hauptmann1 1 School of Computer Science, Carnegie Mellon University 2 ReLER, University of Technology Sydney 3 Zhejiang University |
| Pseudocode | No | The paper describes the methodology using prose and mathematical equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/kgl-prml/Pixel-Level-Cycle-Association. |
| Open Datasets | Yes | We validate our method on two representative domain adaptive semantic segmentation benchmarks, i.e. transferring from the synthetic images (GTAV or SYNTHIA) to the real images (Cityscapes). Both GTAV and SYNTHIA are synthetic datasets. The Cityscapes consists of high quality street scene images captured from various cities. We adopt the ImageNet [14] pretrained weights to initialize the backbone. |
| Dataset Splits | Yes | The dataset is split into a training set with 2,975 images and a validation set with 500 images. For all of the tasks, the images in the Cityscapes training set are used to train the adaptation model, and those in the validation set are employed to test the model's performance. |
| Hardware Specification | No | The paper mentions using DeepLab-V2 with ResNet-101 as the backbone but does not provide specific details about the hardware (e.g., GPU/CPU models) used for training or inference. |
| Software Dependencies | No | The paper mentions using DeepLab-V2, ResNet-101, and stochastic gradient descent (SGD), but does not provide specific version numbers for any software libraries or frameworks used. |
| Experiment Setup | Yes | For the training, we train our model based on stochastic gradient descent (SGD) with momentum of 0.9 and weight decay of 5 × 10⁻⁴. We employ the poly learning rate schedule with the initial learning rate at 2.5 × 10⁻⁴. We train about 11K and 3K iterations with batch size of 8, for GTAV → Cityscapes and SYNTHIA → Cityscapes respectively. In all of our experiments, we set β1, β2, β3 to 0.75, 0.1, 0.01. We set λ to 10.0 which is equivalent to restricting the highest predicted probability among classes at around 0.999. (A minimal sketch of this optimization setup appears after the table.) |
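The following is a minimal PyTorch sketch of the reported optimization setup, not the authors' released code. It assumes the common DeepLab poly-schedule power of 0.9 (the quoted setup does not state it), uses a placeholder module in place of DeepLab-V2 with a ResNet-101 backbone, and takes `max_iter = 11000` to match the ~11K iterations reported for GTAV → Cityscapes.

```python
import torch

# Placeholder network standing in for DeepLab-V2 (ResNet-101 backbone), 19 Cityscapes classes.
model = torch.nn.Conv2d(3, 19, kernel_size=1)

base_lr = 2.5e-4      # initial learning rate reported in the table
max_iter = 11000      # ~11K iterations for GTAV -> Cityscapes (~3K for SYNTHIA -> Cityscapes)

# SGD with the reported momentum and weight decay.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=base_lr,
    momentum=0.9,
    weight_decay=5e-4,
)

def poly_factor(iteration, power=0.9):
    """Poly schedule: lr = base_lr * (1 - iter / max_iter) ** power.
    The power of 0.9 is an assumption (standard DeepLab default)."""
    return (1.0 - iteration / max_iter) ** power

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=poly_factor)

for iteration in range(max_iter):
    # ... forward a batch of 8 images, combine the losses with the reported
    # weights beta_1 = 0.75, beta_2 = 0.1, beta_3 = 0.01, and call backward() ...
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```

On the λ = 10.0 remark: assuming the 19 Cityscapes classes and the non-maximal logits near zero, a top logit capped at 10 gives a softmax probability of e¹⁰ / (e¹⁰ + 18) ≈ 0.9992, which is consistent with the quoted "around 0.999".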