Bootstrapping Semantic Segmentation with Regional Contrast
Authors: Shikun Liu, Shuaifeng Zhi, Edward Johns, Andrew Davison
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate ReCo in a semi-supervised setting, with two different modes: i) Partial Dataset, Full Labels: a sparse subset of training images, where each image has full ground-truth labels, and the remaining images are unlabelled; ii) Partial Labels, Full Dataset: all images have some labels, but covering only a sparse subset of pixels within each image. In both settings, we show that ReCo can consistently improve performance across all methods and datasets. ... We experiment on segmentation datasets: Cityscapes (Cordts et al., 2016) and Pascal VOC 2012 (Everingham et al., 2015)... |
| Researcher Affiliation | Collaboration | Shikun Liu¹, Shuaifeng Zhi¹, Edward Johns², and Andrew J. Davison¹; ¹Dyson Robotics Lab, Imperial College London; ²Robot Learning Lab, Imperial College London |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/lorenmt/reco. |
| Open Datasets | Yes | We experiment on segmentation datasets: Cityscapes (Cordts et al., 2016) and Pascal VOC 2012 (Everingham et al., 2015) in both partial and full label setting. We also evaluate on a more difficult indoor scene dataset SUN RGB-D (Song et al., 2015)... |
| Dataset Splits | Yes | Table 1 shows the mean IoU validation performance on three datasets over three individual runs (different labelled and unlabelled data splits). The number of labelled images shown in the three columns for each dataset, are chosen such that the least-appeared classes have appeared in 5, 15 and 50 images respectively. ... We additionally re-organised the original training and validation split in SUN RGB-D dataset from 5285 and 5050 to 9860 and 475 samples respectively... |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU or CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like 'SGD optimiser' and 'DeepLabV3+' with 'ResNet-101 backbone' and libraries like 'SciPy', but does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | We trained all methods with SGD optimiser with learning rate 2.5 × 10⁻³, momentum 0.9, and weight decay 5 × 10⁻⁴. We adopted the polynomial annealing policy to schedule the learning rate, which is multiplied by (1 − iter/total_iter)^power with power = 0.9, and trained for 40k iterations for all datasets. ... In our ReCo framework, we sampled 256 query samples and 512 key samples and used temperature τ = 0.5 for each mini-batch... The dimensionality for pixel-level representation was set to m = 256. The confidence thresholds were set to δ_w = 0.7 and δ_s = 0.97. |
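
The reported optimiser settings map onto a standard training loop. Below is a minimal sketch assuming a PyTorch implementation (the paper does not state its framework or versions); the placeholder model and variable names are illustrative only, not the authors' code.

```python
# Sketch of the reported training configuration: SGD with polynomial LR annealing.
# Framework (PyTorch) and the placeholder model are assumptions for illustration;
# the paper uses DeepLabV3+ with a ResNet-101 backbone.
import torch

LR = 2.5e-3           # initial learning rate (from the paper)
MOMENTUM = 0.9        # SGD momentum (from the paper)
WEIGHT_DECAY = 5e-4   # weight decay (from the paper)
TOTAL_ITERS = 40_000  # training iterations for all datasets (from the paper)
POWER = 0.9           # polynomial annealing exponent (from the paper)

model = torch.nn.Conv2d(3, 21, kernel_size=1)  # placeholder segmentation head

optimizer = torch.optim.SGD(
    model.parameters(), lr=LR, momentum=MOMENTUM, weight_decay=WEIGHT_DECAY
)

# Polynomial annealing: lr = base_lr * (1 - iter / total_iter) ** power
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda it: (1 - it / TOTAL_ITERS) ** POWER
)

for it in range(TOTAL_ITERS):
    # ... forward pass, supervised + ReCo contrastive losses, backward pass ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```

The remaining hyperparameters quoted above (256 queries, 512 keys, τ = 0.5, m = 256, δ_w = 0.7, δ_s = 0.97) configure the ReCo contrastive loss itself and would sit inside the elided loss computation.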