Continual Semantic Segmentation Leveraging Image-level Labels and Rehearsal

Authors: Mathieu Pagé Fortin, Brahim Chaib-draa

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on Pascal-VOC by varying the proportion of fully- and weakly-supervised data in various setups and show that our contributions consistently improve the mIoU on both past and novel classes.
Researcher Affiliation | Academia | Mathieu Pagé Fortin, Brahim Chaib-draa, Laval University, Québec, Canada. mathieu.page-fortin.1@ulaval.ca, brahim.chaib-draa@ift.ulaval.ca
Pseudocode | No | The paper describes the approach and steps in the main text and through figures, but it does not provide any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about the release of source code or a link to a code repository.
Open Datasets | Yes | We train and evaluate our model in various continual learning scenarios built from the commonly used PASCAL-VOC 2012 dataset [Everingham et al., 2015].
Dataset Splits | Yes | This dataset contains a train split of 10,582 images and a val split of 1,449 images used for testing. ... the hyper-parameters are searched by keeping 20% of the training set for validation.
Hardware Specification | Yes | We train our models with SGD with momentum on 4 Nvidia A100 GPUs with a total batch size of 24 for 30 epochs for each step.
Software Dependencies | No | The paper mentions several components like Deeplab-v3, ResNet-101, and ImageNet, but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | We train our models with SGD with momentum on 4 Nvidia A100 GPUs with a total batch size of 24 for 30 epochs for each step. Initial learning rates of 0.01 and 0.001 are used for the first and subsequent steps, respectively, with a polynomial decay of power 0.9. ... Additionally, with weakly-supervised data a threshold of 0.75 is used to only keep the most confident predictions as pseudo-labels. The weights for distillation losses are 100 in the 19-1 and 15-1 scenarios, and 10 for the 15-5 scenarios. (A minimal sketch of this training configuration is given after the table.)
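
To make the reported setup concrete, the following is a minimal sketch, assuming a PyTorch-style implementation, of how the optimizer, polynomial learning-rate decay, pseudo-label threshold, and distillation-loss weights could fit together. The momentum value, the ignore index, and the placeholder model are assumptions not stated in the excerpt above; this is not the authors' released code.

```python
# Minimal sketch of the reported optimization setup (assumed PyTorch-style).
import torch
import torch.nn as nn

NUM_EPOCHS = 30                 # reported: 30 epochs per continual step
BASE_LR = 0.01                  # reported: 0.01 for the first step, 0.001 afterwards
POLY_POWER = 0.9                # reported: polynomial learning-rate decay of power 0.9
PSEUDO_LABEL_THRESHOLD = 0.75   # reported: keep only predictions above 0.75
IGNORE_INDEX = 255              # assumption: standard PASCAL-VOC "ignore" label
DISTILLATION_WEIGHTS = {"19-1": 100, "15-1": 100, "15-5": 10}  # reported per-scenario weights

# Placeholder for the actual Deeplab-v3 / ResNet-101 network used in the paper.
model = nn.Conv2d(3, 21, kernel_size=1)

# "SGD with momentum"; the momentum value itself is not given, 0.9 is assumed here.
optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR, momentum=0.9)

# Polynomial decay: lr(epoch) = BASE_LR * (1 - epoch / NUM_EPOCHS) ** POLY_POWER
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: (1 - epoch / NUM_EPOCHS) ** POLY_POWER
)

def make_pseudo_labels(logits: torch.Tensor) -> torch.Tensor:
    """Derive pseudo-labels for weakly-supervised images, keeping only the
    most confident predictions and marking all other pixels as ignored."""
    probs = torch.softmax(logits, dim=1)      # (N, C, H, W) class probabilities
    confidence, labels = probs.max(dim=1)     # (N, H, W) per-pixel confidence and class
    labels[confidence < PSEUDO_LABEL_THRESHOLD] = IGNORE_INDEX
    return labels
```

In an actual run, the placeholder network would be replaced by the Deeplab-v3 / ResNet-101 model, and BASE_LR would be lowered to 0.001 for the incremental steps, as stated in the excerpt.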