Non-exemplar Domain Incremental Object Detection via Learning Domain Bias

Authors: Xiang Song, Yuhang He, Songlin Dong, Yihong Gong

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental evaluations on two series of datasets demonstrate the effectiveness of the proposed LDB method in achieving high accuracy on new and old domain datasets. The code is available at https://github.com/SONGX1997/LDB.
Researcher Affiliation | Academia | Xiang Song1, Yuhang He2*, Songlin Dong2, Yihong Gong2; 1School of Software Engineering, Xi'an Jiaotong University; 2College of Artificial Intelligence, Xi'an Jiaotong University; songxiang@stu.xjtu.edu.cn, heyuhang@xjtu.edu.cn, dsl972731417@stu.xjtu.edu.cn, ygong@mail.xjtu.edu.cn
Pseudocode | No | No structured pseudocode or algorithm blocks are provided.
Open Source Code | Yes | The code is available at https://github.com/SONGX1997/LDB.
Open Datasets | Yes | We adopt the Pascal VOC series and BDD100K series datasets to evaluate the effectiveness of our LDB on the DIOD task. The Pascal VOC series consists of datasets from four different domains: Pascal VOC 2007 (Everingham et al. 2010), Clipart, Watercolor, and Comic (Inoue et al. 2018). ... The BDD100K series consists of autonomous driving datasets from three different domains: BDD100K (Yu et al. 2020), Cityscapes (Cordts et al. 2016), and Rainy Cityscape (Hu et al. 2019)...
Dataset Splits | No | The paper provides train and test set sizes for each dataset, but does not explicitly define a separate validation split for hyperparameter tuning across all experiments. For BDD100K, it states '10,000 images from the validation set for testing', which suggests the validation set is used as the test set in this context.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions software components like 'ViTDet', 'ImageNet-1K', 'MAE', and 'AdamW' but does not specify their version numbers or other software dependencies with versions.
Experiment Setup | Yes | We train the model for 20 epochs (5 warm-up epochs) using AdamW (Loshchilov and Hutter 2018) optimizer with a weight decay of 0.1. The learning rate is set to 2e-4, training batch size is set to 2, and input size is set to 1,024. More implementation details are provided in the Appendix.
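
The quoted setup maps onto a standard PyTorch optimizer configuration. Below is a minimal sketch, assuming a PyTorch implementation with linear per-step warm-up; only the hyperparameters (AdamW, learning rate 2e-4, weight decay 0.1, 20 epochs with 5 warm-up epochs, batch size 2, input size 1,024) come from the paper, while the model, data, loss, STEPS_PER_EPOCH, and the post-warm-up schedule are hypothetical placeholders, not the authors' code.

```python
# Sketch of the reported training configuration; see caveats above.
import torch

EPOCHS = 20
WARMUP_EPOCHS = 5
STEPS_PER_EPOCH = 100  # assumption: depends on dataset size and batch size

# Stand-in for the ViTDet-based detector; the real architecture differs.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, stride=8),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 20),
)

# AdamW with lr 2e-4 and weight decay 0.1, as stated in the paper.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, weight_decay=0.1)

def warmup_lambda(step: int) -> float:
    # Linear warm-up over the first 5 epochs; the paper does not state the
    # post-warm-up schedule, so a constant rate afterwards is an assumption.
    warmup_steps = WARMUP_EPOCHS * STEPS_PER_EPOCH
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup_lambda)

for epoch in range(EPOCHS):
    for _ in range(STEPS_PER_EPOCH):
        # Random tensors stand in for real images: batch size 2, input 1,024.
        images = torch.randn(2, 3, 1024, 1024)
        loss = model(images).pow(2).mean()  # placeholder detection loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()
```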