Boosting Semi-Supervised Semantic Segmentation with Probabilistic Representations

Authors: Haoyu Xie, Changqi Wang, Mingkai Zheng, Minjing Dong, Shan You, Chong Fu, Chang Xu

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct sufficient experiments to evaluate PRCL on Pascal VOC and Cityscapes to demonstrate its superiority.
Researcher Affiliation | Collaboration | (1) School of Computer Science and Engineering, Northeastern University, Shenyang, China; (2) School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, Australia; (3) SenseTime Research, Beijing, China
Pseudocode | Yes | ...whose pseudo-code is shown in Appendix A.
Open Source Code | Yes | The code is available at https://github.com/Haoyu-Xie/PRCL.
Open Datasets | Yes | We conduct experiments on the Pascal VOC 2012 (Everingham et al. 2010) dataset and the Cityscapes (Cordts et al. 2016) dataset to test the effectiveness of PRCL.
Dataset Splits | Yes | The Pascal VOC 2012 contains 1464 well-annotated images for training and 1449 images for validation originally. ... Cityscapes is an urban scene dataset which includes 2975 training images and 500 validation images. ... All results are measured on the val set in both Pascal VOC and Cityscapes.
Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions using "DeepLabv3+" and "ResNet-101" but does not provide specific software dependency versions (e.g., library or framework names with version numbers like PyTorch 1.9).
Experiment Setup | Yes | We adjust the PRCL contribution with a loss scheduler. Mathematically, given the total training epochs $T_{total}$ and the initial weight $\lambda_c(0)$, the weight $\lambda_c$ at the $t$-th epoch can be calculated as $\lambda_c(t) = \lambda_c(0)\exp\big(\alpha\,(t/T_{total})^2\big)$, where $\alpha$ is a negative constant which determines the rate of weight descent. ... In $L_u$, we only consider the pixels whose predicted confidence $\hat{y}_q$ (the maximum of the prediction after the Softmax operation) is greater than $\delta_u$, and $\lambda_u$ is defined as the percentage of them. ... We set a threshold $\delta_w$ for sampling valid representations. ... We set a threshold $\delta_s$ for $\hat{y}_q$ and randomly sample a suitable number of hard anchors whose $\hat{y}_q$ are below $\delta_s$.
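
To make the loss scheduler and the confidence filtering quoted in the Experiment Setup row concrete, here is a minimal PyTorch sketch. It is an illustration of the described formulas, not the authors' implementation (which is in the linked repository); the function names `prcl_weight` and `confidence_filtered_loss` and the example values `alpha=-5.0` and `delta_u=0.95` are assumptions made for this sketch, not values from the paper.

```python
import math

import torch
import torch.nn.functional as F


def prcl_weight(t, total_epochs, lambda_c0, alpha=-5.0):
    """Loss scheduler: lambda_c(t) = lambda_c(0) * exp(alpha * (t / T_total)^2).

    alpha is the negative constant that controls how fast the PRCL weight
    decays over training; -5.0 is an illustrative value, not from the paper.
    """
    return lambda_c0 * math.exp(alpha * (t / total_epochs) ** 2)


def confidence_filtered_loss(logits, pseudo_labels, delta_u=0.95):
    """Unsupervised cross-entropy restricted to confident pixels.

    Pixels contribute only if their maximum softmax probability (the paper's
    predicted confidence y_hat_q) exceeds delta_u; lambda_u is the fraction
    of such pixels. delta_u = 0.95 is a placeholder threshold.
    """
    probs = F.softmax(logits, dim=1)                 # (B, C, H, W)
    conf = probs.max(dim=1).values                   # per-pixel confidence
    mask = (conf > delta_u).float()                  # confident-pixel mask
    lambda_u = mask.mean()                           # percentage of confident pixels
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")  # (B, H, W)
    loss_u = (ce * mask).sum() / mask.sum().clamp(min=1.0)
    return lambda_u * loss_u, lambda_u


if __name__ == "__main__":
    logits = torch.randn(2, 21, 64, 64)              # 21 classes, as in Pascal VOC
    pseudo = logits.argmax(dim=1)                    # hypothetical pseudo-labels
    print(prcl_weight(t=10, total_epochs=80, lambda_c0=0.1))
    print(confidence_filtered_loss(logits, pseudo))
```

In this reading, $\lambda_u$ scales the unsupervised term by the fraction of pixels that pass the confidence threshold, so the term contributes more as pseudo-labels become more reliable, consistent with the quoted definition of $\lambda_u$ as a percentage.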