Towards Optimal Discrete Online Hashing with Balanced Similarity

Authors: Mingbao Lin, Rongrong Ji, Hong Liu, Xiaoshuai Sun, Yongjian Wu, Yunsheng Wu

AAAI 2019, pp. 8722-8729 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on three widely-used benchmarks demonstrate the advantages of the proposed method over the state-of-the-art methods.
Researcher Affiliation | Collaboration | Mingbao Lin (1), Rongrong Ji (1,2), Hong Liu (1), Xiaoshuai Sun (1), Yongjian Wu (3), Yunsheng Wu (3); (1) Fujian Key Laboratory of Sensing and Computing for Smart City, Department of Cognitive Science, School of Information Science and Engineering, Xiamen University, China; (2) Peng Cheng Laboratory, China; (3) Tencent Youtu Lab, Tencent Technology (Shanghai) Co., Ltd, China
Pseudocode | Yes | Algorithm 1: Balanced Similarity for Online Discrete Hashing (BSODH)
Open Source Code | No | The paper does not provide an explicit statement or link for the availability of its source code.
Open Datasets | Yes | CIFAR-10 contains 60K samples from 10 classes, with each represented by a 4,096-dimensional CNN feature (Simonyan and Zisserman 2015). ... Places205 is a 2.5-million image set with 205 classes. ... MNIST consists of 70K handwritten digit images with 10 classes, each of which is represented by 784 normalized original pixels. (A summary sketch of these statistics appears after this table.)
Dataset Splits | No | The paper describes partitioning datasets into retrieval and test sets, and uses a subset of the retrieval set for learning hash functions. It mentions 'Parameter Sensitivity' experiments for tuning hyperparameters λt and σt, but does not explicitly define a validation dataset split with specific percentages or counts.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers).
Experiment Setup | Yes | The left two figures in Fig. 3 present the effects of the hyper-parameters λt and σt. ... The best combination for (λt, σt) is (0.6, 0.5). By conducting similar experiments on CIFAR-10 and Places205, we finally set the tuple value of (λt, σt) as (0.3, 0.5) and (0.9, 0.8) for these two benchmarks. ... In our experiment, we set the tuple (ηs, ηd) as (1.2, 0.3) on MNIST. Similarly, it is set as (1.2, 0.2) on CIFAR-10 and (1, 0) on Places205. (A configuration sketch collecting these values appears after this table.)
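
For quick reference, the benchmark statistics quoted in the Open Datasets row can be collected into a short Python snippet. This is a minimal sketch: the dictionary and field names are illustrative and not taken from the paper, and only figures quoted above are filled in.

```python
# Benchmark statistics as quoted in the reproducibility report (field names are illustrative).
DATASETS = {
    "CIFAR-10":  {"num_samples": 60_000,    "num_classes": 10,  "representation": "4,096-d CNN feature (Simonyan and Zisserman 2015)"},
    "Places205": {"num_samples": 2_500_000, "num_classes": 205, "representation": "not specified in the excerpt"},
    "MNIST":     {"num_samples": 70_000,    "num_classes": 10,  "representation": "784 normalized original pixels"},
}

if __name__ == "__main__":
    for name, info in DATASETS.items():
        print(f"{name}: {info['num_samples']} samples, {info['num_classes']} classes, {info['representation']}")
```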
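The per-dataset hyper-parameter tuples reported in the Experiment Setup row can likewise be gathered into one configuration. This is a sketch under stated assumptions: the key names and dictionary layout are our own, not anything released by the authors, and the attribution of (λt, σt) = (0.6, 0.5) to MNIST is inferred from the order in which the values are reported.

```python
# Hyper-parameter tuples quoted from the paper's experiment setup.
# (lambda_t, sigma_t) are tuned via the parameter-sensitivity experiments;
# (eta_s, eta_d) is the second tuple reported per dataset.
# Key names are illustrative; (0.6, 0.5) is assumed to be the MNIST setting.
BSODH_HPARAMS = {
    "MNIST":     {"lambda_t": 0.6, "sigma_t": 0.5, "eta_s": 1.2, "eta_d": 0.3},
    "CIFAR-10":  {"lambda_t": 0.3, "sigma_t": 0.5, "eta_s": 1.2, "eta_d": 0.2},
    "Places205": {"lambda_t": 0.9, "sigma_t": 0.8, "eta_s": 1.0, "eta_d": 0.0},
}

# Example lookup for one benchmark.
cfg = BSODH_HPARAMS["MNIST"]
print(cfg["lambda_t"], cfg["sigma_t"], cfg["eta_s"], cfg["eta_d"])
```

Keeping the settings in a single per-dataset mapping like this makes it easy to check that a reimplementation uses the exact values the paper reports for each benchmark.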