Towards Optimal Binary Code Learning via Ordinal Embedding
Authors: Hong Liu, Rongrong Ji, Yongjian Wu, Wei Liu
AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two large-scale benchmark datasets demonstrate that the proposed OEH method can achieve superior performance over state-of-the-art approaches. |
| Researcher Affiliation | Collaboration | Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, 361005, China; School of Information Science and Engineering, Xiamen University, 361005, China; Best Image, Tencent Technology (Shanghai) Co., Ltd, China; Department of Electrical Engineering, Columbia University, New York, NY 10027, USA |
| Pseudocode | Yes | Algorithm 1 Ordinal Embedding Hashing (OEH) |
| Open Source Code | No | The paper states that 'The source codes of all the compared methods are available publicly' and 'The source codes of the EMD+DTW is available publicly', but it provides no statement or link regarding the open-source availability of the OEH method's own code. |
| Open Datasets | Yes | The CIFAR10 dataset consists of 60,000 color images in 10 classes... The Label Me dataset consists of 22,019 images... The corpora MIR-QBSH in MIREX for QBH is used as the evaluation dataset. |
| Dataset Splits | No | The CIFAR10 dataset... 1,000 images are randomly selected as the test set, and the remaining images are used for training. The Label Me dataset... We randomly selected 2,000 images from the dataset as the test set, leaving the remaining images for training. In each run, the training and test sets are randomly split with the ratios reported above; no separate validation split, or how one was used in the experimental setup, is mentioned (a split sketch follows the table). |
| Hardware Specification | Yes | We implement our OEH hashing using Matlab on a PC with Intel Duo Core i7-3412 3.4GHz and 16G RAM. |
| Software Dependencies | No | The paper mentions 'Matlab' but does not specify a version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | The learning rate η is set to 0.3 in all experiments. The landmarks consist of 500 features obtained by K-means clustering. The balance parameter α is set to 1. We set the regularization parameter λ to 0.001 and the parameter β to 1 in all our experiments for better search performance (a configuration sketch follows the table). |
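
The hyperparameters in the "Experiment Setup" row can be gathered into a small configuration, and the 500 landmarks are reported to come from K-means clustering. Below is a minimal Python sketch of that setup, assuming scikit-learn's `KMeans`; the names `config`, `select_landmarks`, and `train_features`, the number of K-means restarts, and the random seed are illustrative assumptions, not the authors' Matlab implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Reported OEH hyperparameters (values taken from the "Experiment Setup" row).
config = {
    "learning_rate": 0.3,   # eta, used in all experiments
    "alpha": 1.0,           # balance parameter
    "lambda_reg": 0.001,    # regularization parameter
    "beta": 1.0,
    "n_landmarks": 500,     # landmarks obtained by K-means clustering
}

def select_landmarks(train_features: np.ndarray, n_landmarks: int = 500,
                     seed: int = 0) -> np.ndarray:
    """Return K-means cluster centers used as landmarks (assumed procedure)."""
    km = KMeans(n_clusters=n_landmarks, n_init=10, random_state=seed)
    km.fit(train_features)
    return km.cluster_centers_

# Usage with placeholder features; the feature dimensionality is an assumption.
# landmarks = select_landmarks(np.random.rand(59_000, 512), config["n_landmarks"])
```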
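
The "Dataset Splits" row reports only random train/test splits (1,000 held-out test images for CIFAR10, 2,000 for LabelMe) and no validation set. The sketch below illustrates such a split in Python with NumPy; the helper name `random_split`, the fixed seed, and the absence of stratification are assumptions, since the paper does not specify them.

```python
import numpy as np

def random_split(n_samples: int, n_test: int, seed: int = 0):
    """Randomly hold out n_test indices for testing; the rest are for training."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_samples)
    return perm[n_test:], perm[:n_test]   # (train_idx, test_idx)

# CIFAR10: 60,000 images, 1,000 randomly held out for testing.
cifar_train_idx, cifar_test_idx = random_split(60_000, 1_000)

# LabelMe: 22,019 images, 2,000 randomly held out for testing.
labelme_train_idx, labelme_test_idx = random_split(22_019, 2_000)
```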