Activation Modulation and Recalibration Scheme for Weakly Supervised Semantic Segmentation

Authors: Jie Qin, Jie Wu, Xuefeng Xiao, Lujun Li, Xingang Wang (pp. 2117-2125)

AAAI 2022

Reproducibility Variable Result LLM Response
Research Type: Experimental. Extensive experiments show that AMR establishes a new state-of-the-art performance on the PASCAL VOC 2012 dataset, surpassing not only current methods trained with image-level supervision but also some methods relying on stronger supervision, such as saliency labels. Experiments also reveal that the scheme is plug-and-play and can be incorporated with other approaches to boost their performance. Code is available at: https://github.com/jieqin-ai/AMR.
Researcher Affiliation: Collaboration. Jie Qin 1,2,3*, Jie Wu 2, Xuefeng Xiao 2, Lujun Li 3, Xingang Wang 3 — 1 School of Artificial Intelligence, University of Chinese Academy of Sciences; 2 ByteDance Inc.; 3 Institute of Automation, Chinese Academy of Sciences
Pseudocode: No. The paper does not include any explicit pseudocode or algorithm blocks.
Open Source Code: Yes. "Our code is available at: https://github.com/jieqin-ai/AMR."
Open Datasets: Yes. The approach is evaluated on the PASCAL VOC 2012 dataset (Everingham et al. 2015).
Dataset Splits: Yes. Following common practice (Wei et al. 2017; Wang et al. 2020b), the paper uses 10,582 images for training, 1,449 for validation, and 1,456 for testing.
Hardware Specification: No. The paper does not specify the hardware (e.g., GPU model, CPU type) used for running the experiments.
Software Dependencies: No. The paper mentions ResNet50 and DeepLab-v2 backbones but does not list version numbers for software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup: Yes. The network is trained for 8 epochs with a batch size of 16. The initial learning rate is 0.01 with a momentum of 0.9, and stochastic gradient descent is used for optimization with a weight decay of 0.0001.
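The reported optimizer configuration (SGD, learning rate 0.01, momentum 0.9, weight decay 0.0001) can be made concrete with a minimal sketch of one update step. This is an illustrative assumption, not the authors' code; parameters and gradients are plain Python floats rather than tensors.

```python
# Sketch (not the authors' implementation) of one SGD-with-momentum step
# using the hyperparameters reported in the paper.

def sgd_momentum_step(params, grads, velocities,
                      lr=0.01, momentum=0.9, weight_decay=0.0001):
    """Return updated parameters and velocities for one SGD step.

    L2 weight decay is folded into the gradient before the momentum
    accumulation, which matches the common SGD formulation.
    """
    new_params, new_velocities = [], []
    for p, g, v in zip(params, grads, velocities):
        g = g + weight_decay * p   # L2 weight decay contribution
        v = momentum * v + g       # momentum accumulation
        p = p - lr * v             # parameter update
        new_params.append(p)
        new_velocities.append(v)
    return new_params, new_velocities

# One step on a single scalar parameter:
params, vel = sgd_momentum_step([1.0], [0.5], [0.0])
print(params)  # 1.0 - 0.01 * (0.5 + 0.0001 * 1.0) = 0.994999
```

In a PyTorch training loop, the equivalent configuration would typically be expressed via `torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0001)`.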