Progressive Bayesian Inference for Scribble-Supervised Semantic Segmentation
Authors: Chuanwei Zhou, Chunyan Xu, Zhen Cui
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive evaluations of several benchmark datasets demonstrate the effectiveness and superiority of our proposed PBI when compared with other state-of-the-art methods applied to the scribble-supervised semantic segmentation task. ... Extensive experiments have demonstrated that our proposed PBI could boost the performance of the scribble-supervised semantic segmentation and state-of-the-art segmentation performances have been achieved in standard benchmarks. ... We conduct extensive experiments to validate the effectiveness of the proposed method on the scribble-supervised semantic segmentation task and report state-of-the-art performances on the PASCAL VOC 2012 dataset (Everingham et al. 2010) and the PASCAL Context dataset (Hariharan et al. 2011). ... All ablation studies are conducted on the PASCAL VOC 2012 dataset. |
| Researcher Affiliation | Academia | Chuanwei Zhou, Chunyan Xu*, Zhen Cui PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China. {cwzhou, cyx, zhen.cui}@njust.edu.cn |
| Pseudocode | No | The paper includes a diagram of the framework and mathematical equations, but no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Following the standard protocol (Zhang et al. 2021b; Xu et al. 2021b; Pan et al. 2021), we utilize the PASCAL VOC 2012 semantic segmentation dataset (Everingham et al. 2010) and the PASCAL Context dataset (Mottaghi et al. 2014) to evaluate our proposed PBI framework. |
| Dataset Splits | Yes | Following the standard protocol (Zhang et al. 2021b; Xu et al. 2021b; Pan et al. 2021), we utilize the PASCAL VOC 2012 semantic segmentation dataset (Everingham et al. 2010) and the PASCAL Context dataset (Mottaghi et al. 2014) to evaluate our proposed PBI framework. ... The whole framework is trained for 200 epochs with a batch size of 8. For the first 100 epochs, the sampling region V remains in the original scribbles S. Starting from the 100-th epoch, we progressively expand the sampling region V with a radius r of 21 every 20 epochs. ... Comparison with state-of-the-art methods on the PASCAL VOC 2012 validation set. (A hedged sketch of this sampling-region schedule appears after the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | All the experiments are implemented with the PyTorch framework (Paszke et al. 2019). |
| Experiment Setup | Yes | The SGD optimizer with momentum and weight decay being 0.9 and 5e-4 is adopted as the optimizer Ω to train the networks. The learning rate is initially set to 1e-4 and then slowly decayed with a poly schedule, and the whole framework is trained for 200 epochs with a batch size of 8. For the first 100 epochs, the sampling region V remains in the original scribbles S. ... The Gaussian mixture number K is set to 3 empirically. In the optimization process, we implement random augmentations including scaling ([0.5, 2.0]), flipping (p=0.5), rotation ([-10, 10]) and cropping (512 × 512). (A hedged configuration sketch based on these settings follows below.) |
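
For orientation, the following is a minimal PyTorch-style sketch of the optimization settings quoted in the Experiment Setup row: SGD with momentum 0.9 and weight decay 5e-4, an initial learning rate of 1e-4 with a poly decay, 200 epochs, and batch size 8. The stand-in model, the poly power of 0.9, and the loop skeleton are assumptions for illustration; this is not the authors' code.

```python
import torch
import torch.nn as nn

# Stand-in for the segmentation network (assumption; the paper's backbone is not reproduced here).
model = nn.Conv2d(3, 21, kernel_size=1)

# Quoted settings: SGD optimizer, momentum 0.9, weight decay 5e-4, initial lr 1e-4.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9, weight_decay=5e-4)

EPOCHS, BATCH_SIZE = 200, 8  # quoted: 200 epochs, batch size 8

# "Poly" decay of the learning rate; the power 0.9 is an assumed value, not stated in the quote.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: (1.0 - epoch / EPOCHS) ** 0.9
)

for epoch in range(EPOCHS):
    # ... per-batch forward pass, scribble-supervised loss, loss.backward(), then:
    optimizer.step()
    scheduler.step()
```

The quoted augmentations (scaling in [0.5, 2.0], flipping with p = 0.5, rotation in [-10, 10], and 512 × 512 cropping) would live in the data pipeline and are omitted from the sketch.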
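
The Dataset Splits row also quotes the progressive training schedule: the sampling region V stays on the original scribbles S for the first 100 epochs and is then expanded with radius r = 21 every 20 epochs. Below is a hedged sketch of one way such a schedule could be implemented; the disk-shaped binary dilation and the one-dilation-per-step expansion are assumptions, since the paper's exact expansion operator is not given in the quote.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def sampling_region(scribble_mask: np.ndarray, epoch: int, radius: int = 21) -> np.ndarray:
    """Boolean sampling region V at a given epoch (sketch, not the authors' code)."""
    if epoch < 100:
        return scribble_mask.astype(bool)  # V remains the original scribbles S
    # Number of completed 20-epoch expansion steps since the 100-th epoch.
    steps = (epoch - 100) // 20 + 1
    # Disk structuring element of radius r = 21 (assumed shape of the expansion).
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx ** 2 + yy ** 2) <= radius ** 2
    region = scribble_mask.astype(bool)
    for _ in range(steps):
        region = binary_dilation(region, structure=disk)
    return region
```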