Column Sampling Based Discrete Supervised Hashing
Authors: Wang-Cheng Kang, Wu-Jun Li, Zhi-Hua Zhou
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on datasets with semantic labels illustrate that COSDISH can outperform the state-of-the-art methods in real applications, such as image retrieval. We use real datasets to evaluate the effectiveness of our method. |
| Researcher Affiliation | Academia | Wang-Cheng Kang, Wu-Jun Li and Zhi-Hua Zhou, National Key Laboratory for Novel Software Technology, Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, Department of Computer Science and Technology, Nanjing University, China. kwc.oliver@gmail.com, {liwujun,zhouzh}@nju.edu.cn |
| Pseudocode | Yes | Algorithm 1 Discrete optimization in COSDISH |
| Open Source Code | No | The paper links to supplementary material (http://cs.nju.edu.cn/lwj/paper/COSDISH_sup.pdf) but does not state that source code for the method is available at this link or elsewhere. |
| Open Datasets | Yes | Two image datasets with semantic labels are used to evaluate our method and the other baselines. They are CIFAR-10 (Krizhevsky 2009) and NUS-WIDE (Chua et al. 2009). |
| Dataset Splits | Yes | As in LFH (Zhang et al. 2014), for all datasets we randomly choose 1000 points as validation set and 1000 points as query (test) set, with the rest of the points as training set. (See the sketch after the table.) |
| Hardware Specification | Yes | All the experiments are conducted on a workstation with 6 Intel Xeon CPU cores and 48GB RAM. |
| Software Dependencies | No | The paper mentions using L-BFGS-B but does not provide version numbers for any software dependencies. |
| Experiment Setup | Yes | Unless otherwise stated, we set T_sto = 10, T_alt = 3 and \|Ω\| = q in our experiments. Furthermore, in our experiments we find that our algorithm is not sensitive to the initialization. Hence, we adopt random initialization in this paper. (These settings, together with the split above, are sketched in code below the table.) |
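
The split protocol and hyperparameter settings quoted in the rows above are concrete enough to sketch. The following is a minimal Python sketch, not the authors' code: the function name `split_dataset`, the fixed seed, and the NumPy-based implementation are all assumptions; only the 1000/1000/rest split sizes and the T_sto = 10, T_alt = 3, |Ω| = q settings come from the paper.

```python
import numpy as np

# Settings stated in the paper's experiment setup.
T_STO = 10  # number of stochastic (column-sampling) iterations, T_sto
T_ALT = 3   # number of alternating-optimization iterations, T_alt
# |Omega| (the number of sampled columns) is set to q, the code length.

def split_dataset(n_points, n_val=1000, n_query=1000, seed=0):
    """Randomly partition point indices into validation, query (test)
    and training sets, following the LFH-style protocol quoted above.
    The seed is our addition for repeatability; the paper only says
    the points are chosen randomly."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_points)
    val_idx = perm[:n_val]
    query_idx = perm[n_val:n_val + n_query]
    train_idx = perm[n_val + n_query:]
    return val_idx, query_idx, train_idx

# Example: CIFAR-10 has 60,000 images.
val_idx, query_idx, train_idx = split_dataset(60_000)
print(len(val_idx), len(query_idx), len(train_idx))  # 1000 1000 58000
```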