Asynchronous Teacher Guided Bit-wise Hard Mining for Online Hashing

Authors: Sheng Jin, Qin Zhou, Hongxun Yao, Yao Liu, Xian-Sheng Hua

AAAI 2021, pp. 1717-1724 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments conducted on two public benchmarks demonstrate the favorable performance of our method over the state of the art."
Researcher Affiliation | Collaboration | Harbin Institute of Technology; Alibaba DAMO Academy, Alibaba Group
Pseudocode | Yes | Algorithm 1: Asynchronous Teacher-guided Bit-wise Hard Mining for Online Hashing (ATHOH)
Open Source Code | No | The paper does not include any statement or link providing access to open-source code for the described methodology.
Open Datasets | Yes | CIFAR-10 (Krizhevsky and Hinton 2009) contains 60K 32×32 images in 10 classes; 1,000 images are randomly selected as the test set, 20K of the remaining images form the training set, and the rest serve as the database. NUS-WIDE (Chua et al. 2009) contains nearly 270K images with 81 classes; following (Weng and Zhu 2020), the images associated with the 21 most frequent concepts are used as a subset, with 2,000 randomly selected images as the test set and the remaining images used as both the training set and the database. (A split sketch follows the table.)
Dataset Splits | No | The paper specifies train and test sets but does not explicitly mention a separate validation split or how one would be constructed.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper does not mention any specific software dependencies with version numbers.
Experiment Setup | Yes | The interval size n_i is 200, the learning rate η is 0.2, and the parameter λ that balances the semantic loss and the global knowledge distillation loss is 0.1, as discussed in the Supplementary. (The reported values are collected in the config sketch below.)