SimCS: Simulation for Domain Incremental Online Continual Segmentation

Authors: Motasem Alfarra, Zhipeng Cai, Adel Bibi, Bernard Ghanem, Matthias Müller

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that SimCS provides consistent improvements when combined with different CL methods. We analyze several existing continual learning methods and show that they perform poorly in this setting despite working well in class-incremental segmentation. We benchmark regularization-based methods (Kirkpatrick et al. 2017) that are effective in mitigating forgetting in class-incremental continual segmentation (Douillard et al. 2021) and show that they fail in ODICS. Meanwhile, although replay-based methods (Chaudhry et al. 2019) can effectively mitigate forgetting, they may not be feasible due to privacy concerns, e.g., GDPR (Commission 2021).
Researcher Affiliation | Collaboration | Motasem Alfarra1,2, Zhipeng Cai1, Adel Bibi3, Bernard Ghanem2, Matthias Müller1; 1Intel Labs, 2King Abdullah University of Science and Technology (KAUST), 3University of Oxford
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We construct the stream by concatenating all domains based on the year the dataset was published, resulting in the following order: CS (2016) → IDD (2019) → BDD (2020) → ACDC (2021), which mimics the nature of continual learning where data generated earlier will be seen by the model first.
Dataset Splits | No | The paper states: 'we use 80% of the publicly available data from each dataset for training and evaluate on the 20% held out test set from each domain.' This describes train and test splits, but no explicit validation split is mentioned.
Hardware Specification | Yes | We utilized 2 NVIDIA V100 GPUs for each of our experiments.
Software Dependencies | No | The paper mentions using the 'DeepLabV3' architecture and pre-training on 'ImageNet', as well as using the 'CARLA' and 'VIPER' simulators, but it does not specify version numbers for any software components or libraries.
Experiment Setup | Yes | During our experiments, at each time step t of ODICS, the model is presented with a batch of real images St of size 8, i.e., Bt = 8 for all t. Before the next time step t + 1, the model is allowed to train on this batch under a fixed computational budget, measured by the number N of forward and backward passes. Unless stated otherwise, we set N = 4 throughout our experiments.
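The budgeted online training protocol quoted above can be sketched as a simple loop: one incoming batch per time step, and exactly N update passes on that batch before the stream advances. This is a minimal illustration of the loop structure only; the `stream` and `update_step` placeholders are assumptions for the sketch, not the paper's DeepLabV3 pipeline.

```python
def train_on_stream(stream, update_step, batch_size=8, budget_n=4):
    """Run the budgeted online continual training loop.

    At each time step t the model receives one batch of `batch_size`
    images (B_t = 8 in the paper) and performs exactly `budget_n`
    forward+backward passes (N = 4 in the paper) before the next
    batch arrives. Returns the total number of passes used.
    """
    passes_used = 0
    for t, batch in enumerate(stream):
        assert len(batch) == batch_size  # B_t = 8 for every t
        for _ in range(budget_n):        # fixed compute budget N
            update_step(batch)           # one forward + backward pass
            passes_used += 1
    return passes_used


# Toy usage: a 5-step stream of dummy batches with a counting update step.
if __name__ == "__main__":
    stream = [[0] * 8 for _ in range(5)]
    calls = []
    total = train_on_stream(stream, lambda b: calls.append(1))
    print(total)  # 5 time steps x 4 passes
```

Under this protocol the total compute is fixed at N passes per step regardless of how hard a batch is, which is what makes the comparison across CL methods budget-fair.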