Hamming Compatible Quantization for Hashing
Authors: Zhe Wang, Ling-Yu Duan, Jie Lin, Xiaofang Wang, Tiejun Huang, Wen Gao
Venue: IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiment results have shown our approach significantly improves the performance of various state-of-the-art hashing methods while maintaining fast retrieval speed. |
| Researcher Affiliation | Academia | Zhe Wang, Ling-Yu Duan, Jie Lin, Xiaofang Wang, Tiejun Huang, Wen Gao The Institute of Digital Media, Peking University, Beijing, China {zhew, lingyu, jielin, xiaofangwang, tjhuang, wgao}@pku.edu.cn |
| Pseudocode | Yes | Algorithm 1 shows the pseudo-code. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing its source code or a link to a code repository. |
| Open Datasets | Yes | Extensive experiments were carried out over three widely used retrieval benchmark datasets, LabelMe-22K [Torralba et al., 2008], CIFAR-10 [Krizhevsky, 2009] and NUS-WIDE [Chua et al., 2009]. |
| Dataset Splits | No | The paper mentions randomly selecting 1000 images as queries, using the remainder as the database, and sampling 1000 images to train the quantization boundaries. Tuning the parameter λ implies some validation process, but the paper gives no explicit training/validation/test percentages or counts and refers to no standard splits; the only stated sampling concerns the quantization boundaries, not the full model training. |
| Hardware Specification | Yes | We measure the search time on an Intel(R) Core(TM) i5 3470 CPU at 3.20GHz with a single thread. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers. |
| Experiment Setup | Yes | In the following experiments, we set λ = 0.6, 0.7, 0.8, 0.9 at code size 32, 64, 128, 256, respectively. |
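
The paper releases no code, so the retrieval step behind the timing row above (Hamming-distance ranking of binary codes on a single CPU thread) can only be illustrated generically. Below is a minimal Python/NumPy sketch of that step, not the paper's HCQ quantizer itself; the function names `to_packed_codes` and `hamming_rank`, the popcount-via-`unpackbits` trick, and the toy 64-bit random codes are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def to_packed_codes(bits: np.ndarray) -> np.ndarray:
    """Pack an (n, b) array of {0, 1} bits into (n, b/8) uint8 codewords."""
    return np.packbits(bits.astype(np.uint8), axis=1)

def hamming_rank(query: np.ndarray, database: np.ndarray) -> np.ndarray:
    """Rank packed database codes by Hamming distance to one packed query code."""
    xor = np.bitwise_xor(database, query)            # differing bits, per byte
    dists = np.unpackbits(xor, axis=1).sum(axis=1)   # popcount per code
    return np.argsort(dists, kind="stable")          # nearest codes first

# Toy usage (hypothetical data): 10k random 64-bit codes, one query.
rng = np.random.default_rng(0)
db_bits = rng.integers(0, 2, size=(10_000, 64))
q_bits = rng.integers(0, 2, size=(1, 64))
db_codes, q_code = to_packed_codes(db_bits), to_packed_codes(q_bits)
order = hamming_rank(q_code[0], db_codes)
print(order[:10])  # indices of the 10 nearest database codes
```

The XOR-plus-popcount formulation is why Hamming ranking stays fast at the code sizes the paper reports (32 to 256 bits): each distance is a handful of bitwise word operations, which is what makes a single-thread CPU timing like the one quoted above meaningful.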