Static-Dynamic Co-teaching for Class-Incremental 3D Object Detection

Authors: Na Zhao, Gim Hee Lee (pp. 3436-3445)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on two benchmark datasets and demonstrate the superior performance of our SDCoT over baseline approaches in several incremental learning scenarios.
Researcher Affiliation | Academia | Na Zhao, Gim Hee Lee, Department of Computer Science, National University of Singapore, {nazhao, gimhee.lee}@comp.nus.edu.sg
Pseudocode | No | The paper does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/Na-Z/SDCoT.
Open Datasets | Yes | We evaluate SDCoT on the SUN RGB-D 3D object detection benchmark and the ScanNet dataset. SUN RGB-D (Song, Lichtenberg, and Xiao 2015)... ScanNet (Dai et al. 2017)...
Dataset Splits | Yes | SUN RGB-D (Song, Lichtenberg, and Xiao 2015) consists of 5,285 training samples and 5,050 validation samples for hundreds of object classes. ScanNet (Dai et al. 2017) consists of 1,201 training samples and 312 validation samples, where there are no amodal oriented 3D bounding boxes, only point-level semantic segmentation labels.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | We set τ_o and τ_c, which control the selection of pseudo labels, to 0.95 and 0.9, respectively. The weights in the loss function (i.e. Eq. 2) are set as λ_s = 10, λ_d = 1, λ_c = 10. We adopt a ramp-up technique (Tarvainen and Valpola 2017) to schedule the respective contributions of λ_d and λ_c. Specifically, λ_d and λ_c ramp up from 0 to their corresponding maximum values during the first 30 epochs, using the sigmoid-shaped function $e^{-5(1-t)^2}$, where t increases linearly from 0 to 1 during the ramp-up period. Following SESS, we set α in the EMA to 0.99 during the ramp-up period and raise it to 0.999 in the following training. The base model Φ_B and the student network Φ_{B∪N} are trained by an Adam optimizer. The initial learning rate for Φ_B is set to 0.001 and then decayed by 0.1 at the 80th and 120th epochs.
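
For reference, here is a minimal sketch of the schedules quoted above (ramp-up loss weighting, EMA teacher update, and optimizer configuration), assuming a PyTorch-style training loop. The function names and the use of MultiStepLR are illustrative assumptions and are not taken from the released SDCoT code.

```python
import math

import torch

# Maximum loss weights and ramp-up length reported in the paper (Eq. 2).
LAMBDA_S = 10.0
LAMBDA_D_MAX = 1.0
LAMBDA_C_MAX = 10.0
RAMP_UP_EPOCHS = 30


def sigmoid_rampup(epoch: int) -> float:
    """Sigmoid-shaped ramp-up factor exp(-5 * (1 - t)^2) with t in [0, 1]."""
    t = min(max(epoch / RAMP_UP_EPOCHS, 0.0), 1.0)
    return math.exp(-5.0 * (1.0 - t) ** 2)


def loss_weights(epoch: int):
    """Per-epoch weights for the supervised, distillation, and consistency terms."""
    w = sigmoid_rampup(epoch)
    return LAMBDA_S, LAMBDA_D_MAX * w, LAMBDA_C_MAX * w


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, epoch: int) -> None:
    """EMA update of the dynamic teacher: alpha = 0.99 during ramp-up, 0.999 afterwards."""
    alpha = 0.99 if epoch < RAMP_UP_EPOCHS else 0.999
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)


def build_optimizer(model: torch.nn.Module):
    """Adam with lr 1e-3, decayed by 0.1 at epochs 80 and 120 (MultiStepLR is assumed)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[80, 120], gamma=0.1)
    return optimizer, scheduler
```

Under this sketch, loss_weights(0) returns roughly (10, 0.007, 0.07) and loss_weights(30) returns (10, 1, 10), which matches the described ramp-up from 0 to the maximum values over the first 30 epochs.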