Debiased Novel Category Discovering and Localization

Authors: Juexiao Feng, Yuhong Yang, Yanchun Xie, Yaqian Li, Yandong Guo, Yuchen Guo, Yuwei He, Liuyu Xiang, Guiguang Ding

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on the NCDL benchmark, and the results demonstrate that the proposed DRM approach significantly outperforms previous methods, establishing a new state-of-the-art.
Researcher Affiliation | Collaboration | 1 Tsinghua University, 2 BNRist, 3 Hangzhou Zhuoxi Institute of Brain and Intelligence, 4 OPPO Research Institute, 5 Beijing University of Posts and Telecommunications
Pseudocode | Yes | The main steps of the algorithm are as follows: 1) A subset of the training data is extracted, and K cluster centers are constructed with K-means. 2) Further samples are drawn from the training set, fed to the model, and assigned to the nearest cluster center. 3) The center of each cluster is updated. 4) Steps 2 and 3 are repeated until the cluster centers stabilize or the maximum number of iterations is reached. A hedged code sketch of these steps follows the table.
Open Source Code | No | The paper does not provide a link or an explicit statement about the availability of its own source code. It mentions reproducing a baseline's mechanism because that baseline was not open source.
Open Datasets | Yes | The datasets we mainly used are (1) the Pascal VOC 2007 dataset (Everingham et al. 2010), which contains 10k images with 20 labeled categories, and (2) the COCO 2014 dataset (Lin et al. 2014), which contains 80 annotated categories with 80k images in the training set and 5k images in the validation set.
Dataset Splits | Yes | The COCO 2014 dataset (Lin et al. 2014) contains 80 annotated categories with 80k images in the training set and 5k images in the validation set.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications).
Software Dependencies | No | The paper describes the methods and experimental setup, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, specific libraries and their versions).
Experiment Setup | No | The paper describes the proposed methods and datasets but does not explicitly detail specific experimental setup parameters such as learning rates, batch sizes, optimizer settings, or number of training epochs in the main text.
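
Below is a minimal NumPy sketch of the clustering loop quoted in the Pseudocode row. It is a sketch under stated assumptions, not the paper's implementation: the function name discover_clusters, the Euclidean distance metric, the running-mean center update, and the hyperparameters init_size, batch_size, max_iters, and tol are all illustrative and do not come from the paper.

```python
import numpy as np

def discover_clusters(features, k, init_size=1000, batch_size=256,
                      max_iters=100, tol=1e-4, seed=0):
    """Illustrative mini-batch clustering following the four quoted steps.
    All names and hyperparameters here are assumptions, not paper details."""
    rng = np.random.default_rng(seed)
    features = np.asarray(features, dtype=float)
    n = len(features)

    # Step 1: build K initial centers with a few K-means (Lloyd) iterations
    # on a small subset of the training data.
    subset = features[rng.choice(n, size=min(init_size, n), replace=False)]
    centers = subset[rng.choice(len(subset), size=k, replace=False)].copy()
    for _ in range(10):
        assign = np.argmin(((subset[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = subset[assign == j].mean(axis=0)

    # Steps 2-4: draw batches, assign each sample to its nearest center,
    # update that center, and stop once the centers stabilize or the
    # maximum number of iterations is reached.
    counts = np.zeros(k)
    for _ in range(max_iters):
        prev = centers.copy()
        batch = features[rng.choice(n, size=min(batch_size, n), replace=False)]
        assign = np.argmin(((batch[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for x, j in zip(batch, assign):
            counts[j] += 1
            centers[j] += (x - centers[j]) / counts[j]  # running-mean update
        if np.linalg.norm(centers - prev) < tol:
            break
    return centers
```

For a feature matrix of shape (N, D), e.g. discover_clusters(np.random.rand(5000, 128), k=20), the function returns a (20, D) array of cluster centers; in the paper's setting the inputs would be features produced by the detection model rather than random data.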