Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning

Authors: Hanzhe Hu, Fangyun Wei, Han Hu, Qiwei Ye, Jinshi Cui, Liwei Wang

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimentally, AEL outperforms the state-of-the-art methods by a large margin on the Cityscapes and Pascal VOC benchmarks under various data partition protocols.
Researcher Affiliation | Collaboration | Hanzhe Hu (1,4), Fangyun Wei (2), Han Hu (2), Qiwei Ye (2), Jinshi Cui (1), Liwei Wang (1,3); 1: Key Laboratory of Machine Perception (MOE), School of EECS, Peking University; 2: Microsoft Research Asia; 3: Institute for Artificial Intelligence, Peking University; 4: Zhejiang Lab
Pseudocode | No | The paper describes methods in narrative text and uses figures for illustration, but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/hzhupku/SemiSeg-AEL.
Open Datasets | Yes | Cityscapes [1] dataset is designed for urban scene understanding. PASCAL VOC 2012 [33] dataset is a standard object-centric semantic segmentation dataset. ADE20K dataset [35] is a large-scale scene parsing benchmark...
Dataset Splits | Yes | The finely annotated 5,000 images are split into 2,975, 500 and 1,525 images for training, validation and testing respectively (Cityscapes). The standard training, validation and testing sets consist of 1,464, 1,449 and 1,556 images, respectively (PASCAL VOC 2012). The dataset includes 20K/2K/3K images for training, validation and testing (ADE20K).
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions the use of ResNet-101 and DeepLabv3+ architectures, but does not provide specific version numbers for software dependencies such as deep learning frameworks or libraries.
Experiment Setup | Yes | We use stochastic gradient descent (SGD) optimizer with initial learning rate 0.01, weight decay 0.0005 and momentum 0.9. Moreover, we adopt the poly learning rate policy, where the initial learning rate is multiplied by (1 − iter/max_iter)^0.9. We adopt the crop size as 769 × 769, batch size as 16 and training iterations as 18k.
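
The quoted setup is specific enough to reconstruct the optimizer and schedule. Below is a minimal sketch, assuming PyTorch; the model is a placeholder stand-in, since the paper's DeepLabv3+/ResNet-101 network and data pipeline are not reproduced here.

```python
# Minimal sketch of the quoted training setup, assuming PyTorch.
import torch

base_lr, max_iter = 0.01, 18_000  # initial LR 0.01, 18k iterations

# Placeholder model; the paper uses DeepLabv3+ with a ResNet-101 backbone.
model = torch.nn.Conv2d(3, 19, kernel_size=1)  # 19 classes for Cityscapes

optimizer = torch.optim.SGD(
    model.parameters(), lr=base_lr, momentum=0.9, weight_decay=0.0005
)

# Poly policy: lr = base_lr * (1 - iter / max_iter) ** 0.9
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda it: (1.0 - it / max_iter) ** 0.9
)

for it in range(max_iter):
    # One training step on a batch of 16 crops of size 769 x 769 goes here.
    optimizer.step()
    scheduler.step()
```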