Nearest Neighbor Classifier Embedded Network for Active Learning

Authors: Fang Wan, Tianning Yuan, Mengying Fu, Xiangyang Ji, Qingming Huang, Qixiang Ye

AAAI 2021, pp. 10041-10048

Each entry below pairs a reproducibility variable with its assessed result and the supporting LLM response. Quoted passages are verbatim from the paper.
Research Type: Experimental. "Experimental results show that, with simple selection based on rejection or confusion confidence, NCE-Net improves state-of-the-arts on image classification and object detection tasks with significant margins."
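The paper's exact scoring formulas are not reproduced in this report, so the following is only a minimal sketch of what rejection- and confusion-based selection could look like, assuming the network exposes a per-class acceptance score in [0, 1] for each unlabeled sample. The function name `select_for_labeling`, the acceptance-matrix interface, and both scoring rules are illustrative assumptions, not the paper's formulation.

```python
import torch

def select_for_labeling(acceptance: torch.Tensor, budget: int, mode: str = "rejection"):
    """Rank unlabeled samples by a confidence-based acquisition score.

    acceptance: (N, C) tensor of per-class acceptance scores in [0, 1],
                e.g. from a nearest-neighbor-style classifier head
                (hypothetical interface, not the paper's exact one).
    budget:     number of samples to send for annotation.
    mode:       "rejection" favors samples no class accepts confidently;
                "confusion" favors samples several classes accept at once.
    """
    if mode == "rejection":
        # Low maximum acceptance => the sample is near-rejected by every class.
        score = 1.0 - acceptance.max(dim=1).values
    elif mode == "confusion":
        # Small gap between the two highest acceptances => class confusion.
        top2 = acceptance.topk(2, dim=1).values
        score = 1.0 - (top2[:, 0] - top2[:, 1])
    else:
        raise ValueError(f"unknown mode: {mode}")
    return score.topk(budget).indices  # indices of the most informative samples
```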
Researcher Affiliation: Academia. "1 University of Chinese Academy of Sciences, Beijing, China; 2 Tsinghua University, Beijing, China. {wanfang, qmhuang, qxye}@ucas.ac.cn; {yuantianning19, fumengying19}@mails.ucas.ac.cn; xyji@tsinghua.edu.cn"
Pseudocode: No. The paper does not contain structured pseudocode or clearly labeled algorithm blocks.
Open Source Code: No. The paper does not provide concrete access information (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the described methodology.
Open Datasets: Yes. "CIFAR-10 and CIFAR-100 were used as the benchmarks and the top-1 accuracy was used as the evaluation metric. ... For object detection, the PASCAL VOC 2007 and 2012 datasets (Mark et al. 2010) were used for evaluation."
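For reference, the quoted classification benchmarks are directly available through torchvision, and the quoted metric is plain top-1 accuracy. A minimal sketch follows; the data root and the normalization statistics are the commonly used CIFAR-10 values, not something the paper specifies.

```python
import torch
from torchvision import datasets, transforms

# Standard CIFAR-10 loaders; "./data" and the normalization constants
# are conventional placeholders, not taken from the paper.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),
                         (0.2470, 0.2435, 0.2616)),
])
train_set = datasets.CIFAR10("./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10("./data", train=False, download=True, transform=transform)

def top1_accuracy(logits: torch.Tensor, targets: torch.Tensor) -> float:
    """Fraction of samples whose highest-scoring class matches the label."""
    return (logits.argmax(dim=1) == targets).float().mean().item()
```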
Dataset Splits: No. The paper provides training and testing splits (e.g., '50000 for train and 10000 for test' for CIFAR-10) but does not explicitly mention a separate validation split or its size.
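Because the paper defines no validation set, a reproducer would have to construct one themselves. A common approach is sketched below; the 45000/5000 sizes and the seed are illustrative assumptions, not values from the paper.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Hypothetical 45k/5k carve-out of CIFAR-10's 50k training images.
train_set = datasets.CIFAR10("./data", train=True, download=True,
                             transform=transforms.ToTensor())
generator = torch.Generator().manual_seed(0)  # fixed seed so the split is stable
train_subset, val_subset = random_split(train_set, [45000, 5000], generator=generator)
```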
Hardware Specification: Yes. "NCE-Net was implemented with Pytorch and run on a single NVIDIA RTX 2080Ti GPU. ... We evaluated the training time of NCE-Net and Softmax approaches on CIFAR10 using a NVIDIA GTX 1080Ti GPU."
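Wall-clock comparisons like the quoted one are sensitive to asynchronous GPU execution, so any re-measurement should synchronize before and after timing. A minimal sketch, where the model, batch, loss, and optimizer are all placeholders supplied by the caller:

```python
import time
import torch

def time_training_step(model, batch, loss_fn, optimizer, device="cuda"):
    """Time one optimization step, synchronizing so GPU work is included."""
    model.to(device).train()
    inputs, targets = (t.to(device) for t in batch)
    torch.cuda.synchronize()          # flush pending GPU work before timing
    start = time.perf_counter()
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    torch.cuda.synchronize()          # wait for the step to finish on the GPU
    return time.perf_counter() - start
```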
Software Dependencies: No. The paper mentions 'Pytorch' but does not specify a version number for it or for any other software dependency.
Experiment Setup: Yes. "The classification model was trained for 200 epochs in each cycles, where the learning rate were set to 0.1 for the first 160 epoch and decreased to 0.01 for the remaining epochs. The batch size was set to 128. The momentum and the weight decay were set to 0.9 and 0.0005, respectively."
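The quoted hyperparameters map directly onto a standard PyTorch optimizer and step schedule. A sketch follows; note the quote never names the optimizer, so SGD is an assumption suggested by the momentum/weight-decay settings, and `make_training_setup` and `train_one_epoch` are hypothetical helpers.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiStepLR
from torch.utils.data import DataLoader

def make_training_setup(model, train_set):
    """Optimizer and schedule matching the quoted setup: lr 0.1 for the
    first 160 epochs, then 0.01; batch size 128; momentum 0.9; weight
    decay 0.0005. SGD itself is an assumption, not stated in the quote."""
    loader = DataLoader(train_set, batch_size=128, shuffle=True)
    optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=0.0005)
    scheduler = MultiStepLR(optimizer, milestones=[160], gamma=0.1)  # 0.1 -> 0.01
    return loader, optimizer, scheduler

# Per active-learning cycle: 200 epochs, stepping the schedule once per epoch.
# for epoch in range(200):
#     train_one_epoch(model, loader, optimizer)   # hypothetical helper
#     scheduler.step()
```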