ALIFE: Adaptive Logit Regularizer and Feature Replay for Incremental Semantic Segmentation

Authors: Youngmin Oh, Donghyeon Baek, Bumsub Ham

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the effectiveness of our approach with extensive experiments on standard ISS benchmarks, and show that our method achieves a better trade-off in terms of accuracy and efficiency."
Researcher Affiliation | Academia | "Youngmin Oh, Donghyeon Baek, Bumsub Ham (School of Electrical and Electronic Engineering, Yonsei University)"
Pseudocode | Yes | "We provide the pseudo code in the supplementary material."
Open Source Code | Yes | https://cvlab.yonsei.ac.kr/projects/ALIFE. "...Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]"
Open Datasets | Yes | "We evaluate our method on standard ISS benchmarks (PASCAL VOC [10] and ADE20K [40]). PASCAL VOC provides 10,582 training [13] and 1,449 validation samples with 20 object and one background categories, while ADE20K consists of 20,210 and 2,000 samples for training and validation, respectively, with 150 object/stuff categories."
Dataset Splits | Yes | "PASCAL VOC provides 10,582 training [13] and 1,449 validation samples... while ADE20K consists of 20,210 and 2,000 samples for training and validation, respectively... For evaluation, we report IoU scores on the validation set for each dataset."
Hardware Specification | Yes | "We use 2 NVIDIA TITAN RTX GPUs for all experiments. Please refer to the supplementary material for details."
Software Dependencies | No | The paper mentions software components such as DeepLab-V3, ResNet-101, and the SGD and Adam optimizers, but does not specify their versions or those of the underlying frameworks (e.g., PyTorch, TensorFlow).
Experiment Setup | Yes | "We use the SGD optimizer with an initial learning rate set to 1e-2 and 1e-3 for base and incremental stages, respectively. ... DeepLab-V3 is trained for 30 and 60 epochs at a base stage (t = 1) on PASCAL VOC [10] and ADE20K [40], respectively. ... We train rotation matrices for 10 epochs using the Adam optimizer with an initial learning rate of 1e-3, and fix a preset number S and a temperature value τ to 1,000 and 10 for all experiments. We fine-tune a classifier for 1 epoch using the SGD optimizer with an initial learning rate of 1e-3. For all experiments, we adjust the learning rate by the poly schedule."
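
The training recipe in the last row is compact enough to sketch in code. Below is a minimal PyTorch sketch of the base-stage optimizer and the poly learning-rate schedule it describes. The momentum value, the poly power of 0.9, the per-epoch iteration count, and the toy model and data are all assumptions or placeholders for illustration; they are not taken from the paper.

```python
import torch
from torch import nn
from torch.nn import functional as F
from torch.optim import SGD
from torch.optim.lr_scheduler import LambdaLR

# Placeholder model: a 1x1 conv standing in for DeepLab-V3 with a
# ResNet-101 backbone (21 classes = 20 objects + background on VOC).
model = nn.Conv2d(3, 21, kernel_size=1)

# Base stage uses SGD with an initial learning rate of 1e-2; incremental
# stages use 1e-3 instead. The momentum value is an assumption, not
# stated in the excerpt.
optimizer = SGD(model.parameters(), lr=1e-2, momentum=0.9)

# Poly schedule: lr = base_lr * (1 - iter / max_iter) ** power.
# power = 0.9 is the value commonly paired with DeepLab-style training
# and is an assumption here.
epochs = 30          # base-stage epochs on PASCAL VOC, per the excerpt
iters_per_epoch = 10 # placeholder; depends on dataset size and batch size
max_iters = epochs * iters_per_epoch
scheduler = LambdaLR(optimizer, lambda it: (1.0 - it / max_iters) ** 0.9)

for it in range(max_iters):
    images = torch.randn(2, 3, 65, 65)           # dummy image batch
    targets = torch.randint(0, 21, (2, 65, 65))  # dummy pixel labels
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()  # decay the learning rate once per iteration
```

The other runs in the row would follow the same pattern with different settings: incremental stages start SGD at 1e-3, the rotation matrices are trained with Adam at 1e-3 for 10 epochs, and the classifier is fine-tuned with SGD at 1e-3 for 1 epoch.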