Deep Supervised Discrete Hashing
Authors: Qi Li, Zhenan Sun, Ran He, Tieniu Tan
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results have shown that our method outperforms current state-of-the-art methods on benchmark datasets. ... We conduct extensive experiments on two public benchmark datasets: CIFAR-10 and NUS-WIDE." |
| Researcher Affiliation | Academia | Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition; CAS Center for Excellence in Brain Science and Intelligence Technology; Institute of Automation, Chinese Academy of Sciences |
| Pseudocode | No | The paper describes the optimization steps and equations but does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper mentions 'Then we re-run the source code provided by the authors to obtain the retrieval performance' in reference to a comparison method (VDSH), but does not state that the source code for their own method is publicly available or provide a link. |
| Open Datasets | Yes | We conduct extensive experiments on two public benchmark datasets: CIFAR-10 and NUS-WIDE. |
| Dataset Splits | Yes | In CIFAR-10, we randomly select 100 images per class (1,000 images in total) as the test query set, 500 images per class (5,000 images in total) as the training set. For NUS-WIDE dataset, we randomly sample 100 images per class (2,100 images in total) as the test query set, 500 images per class (10,500 images in total) as the training set. The parameters of our algorithm are set based on the standard cross-validation procedure. |
| Hardware Specification | No | No specific hardware specifications (e.g., GPU models, CPU types, memory) used for running the experiments were mentioned in the paper. |
| Software Dependencies | No | The paper mentions using the 'CNN-F network architecture' and comparing against methods like 'DQN', 'DHN', 'CNNH', etc., which imply the use of deep learning frameworks. However, no specific software names with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x) were provided. |
| Experiment Setup | Yes | The parameters of our algorithm are set based on the standard cross-validation procedure. µ, ν and η in Equation 11 are set to 1, 0.1 and 55, respectively. |
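The per-class split protocol reported in the Dataset Splits row (e.g., for CIFAR-10: 100 query and 500 training images per class, with the remainder forming the retrieval database) can be sketched as below. This is an illustrative reconstruction for the single-label case, not the authors' code; the function name `per_class_split` and the seeding are assumptions, and NUS-WIDE, being multi-label, would need a per-label variant.

```python
import random
from collections import defaultdict

def per_class_split(labels, n_query, n_train, seed=0):
    """Randomly pick n_query test-query and n_train training indices per
    class; the remaining indices form the retrieval database.
    Illustrative sketch of the paper's split protocol (hypothetical helper,
    not the authors' implementation)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    query, train, database = [], [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        query.extend(idxs[:n_query])
        train.extend(idxs[n_query:n_query + n_train])
        database.extend(idxs[n_query + n_train:])
    return query, train, database

# Toy check with 10 classes of 700 samples each:
# 100 query + 500 train per class -> 1,000 query / 5,000 train overall,
# matching the CIFAR-10 counts quoted in the table.
labels = [c for c in range(10) for _ in range(700)]
q, t, db = per_class_split(labels, n_query=100, n_train=500)
print(len(q), len(t), len(db))  # 1000 5000 1000
```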