Boosting Complementary Hash Tables for Fast Nearest Neighbor Search
Authors: Xianglong Liu, Cheng Deng, Yadong Mu, Zhujin Li
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments carried out on two popular tasks including Euclidean and semantic nearest neighbor search demonstrate that the proposed boosted complementary hash-tables method enjoys the strong table complementarity and significantly outperforms the state-of-the-arts. |
| Researcher Affiliation | Academia | State Key Lab of Software Development Environment, Beihang University, Beijing 100191, China; School of Electronic Engineering, Xidian University, Xi'an 710071, Shaanxi, China; Institute of Computer Science and Technology, Peking University, Beijing 100080, China |
| Pseudocode | Yes | Algorithm 1 Boosted Complementary Hash-Tables (BCH). |
| Open Source Code | No | No explicit statement or link providing open-source code for the methodology was found in the paper. |
| Open Datasets | Yes | We employ two widely-used large data sets: SIFT-1M and GIST-1M(1), consisting of one million 128-D SIFT and 960-D GIST features respectively. [...] We employ two widely-used large image datasets: CIFAR-10(2) and NUS-WIDE(3). (1) http://corpus-texmex.irisa.fr (2) https://www.cs.toronto.edu/~kriz/cifar.html (3) lms.comp.nus.edu.sg/research/NUS-WIDE.htm |
| Dataset Splits | No | For each dataset, we construct a training and a testing set respectively with 10,000 and 3,000 random samples. |
| Hardware Specification | Yes | All experiments are conducted on a workstation with Intel Xeon CPU E5-4607@2.60GHz and 48GB memory |
| Software Dependencies | No | No specific software dependencies with version numbers were mentioned in the paper. |
| Experiment Setup | Yes | As to BCH, for each training sample we choose 50 homogeneous neighbors and 100 heterogeneous neighbors based on Euclidean distance. Moreover, the parameter λ is simply set to 1.0. |
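Since the paper releases no reference code, the neighbor-selection step quoted above (per-sample homogeneous and heterogeneous neighbors ranked by Euclidean distance) can be sketched as follows. This is a hypothetical reconstruction, not the authors' implementation: the function name `select_neighbors` and the use of class labels to distinguish homogeneous from heterogeneous samples are assumptions.

```python
import numpy as np

def select_neighbors(X, labels, n_homo=50, n_hetero=100):
    """Hypothetical sketch of BCH's setup step: for each sample, pick
    the nearest same-label (homogeneous) and different-label
    (heterogeneous) neighbors by Euclidean distance.
    X: (n, d) feature matrix; labels: (n,) class labels."""
    # Pairwise squared Euclidean distances via the expansion
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.fill_diagonal(D, np.inf)  # never select a sample as its own neighbor

    homo, hetero = [], []
    for i in range(len(X)):
        same = np.where(labels == labels[i])[0]
        same = same[same != i]
        diff = np.where(labels != labels[i])[0]
        # Sort each candidate pool by distance to sample i, keep the closest
        homo.append(same[np.argsort(D[i, same])][:n_homo])
        hetero.append(diff[np.argsort(D[i, diff])][:n_hetero])
    return homo, hetero
```

In practice the paper uses 10,000 training samples with 50/100 neighbors; the quadratic distance matrix above is fine at that scale, though a k-d tree or approximate index would be preferable for larger training sets.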