Complementary Binary Quantization for Joint Multiple Indexing
Authors: Qiang Fu, Xu Han, Xianglong Liu, Jingkuan Song, Cheng Deng
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments carried out on two popular large-scale tasks, including Euclidean and semantic nearest neighbor search, demonstrate that the proposed CBQ method enjoys strong table complementarity and significantly outperforms the state of the art, with relative performance gains of up to 57.76%. |
| Researcher Affiliation | Academia | (1) State Key Lab of Software Development Environment, Beihang University, China; (2) Center for Future Media and School of Computer Science and Engineering, University of Electronic Science and Technology of China, China; (3) School of Electronic Engineering, Xidian University, China |
| Pseudocode | Yes | Algorithm 1 Complementary Binary Quantization (CBQ). |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is open-source or available. |
| Open Datasets | Yes | In the experiments, we randomly select 10,000 and 1,000 samples as the training and testing sets, respectively. ... We employ the two widely-used datasets SIFT-1M and GIST-1M [Jegou et al., 2011] ... we choose two widely-used large-scale image datasets: CIFAR-10 and NUS-WIDE. |
| Dataset Splits | No | The paper mentions training and testing sets, but does not explicitly describe a validation set or its split. |
| Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | To start the algorithm, we initialize the prototypes P and the assignment m_i for each sample using the classical K-means algorithm on the training dataset. The number of clusters (or prototypes) is set to M = L·2^b at first. Based on the initialization, we also estimate the scaling variable λ using the full binary codes in L hypercubes of b dimensions... We set µ to 100 on SIFT-1M and 0.2 on GIST-1M. ... We set µ to 10 on CIFAR-10 and 20 on NUS-WIDE. |
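
For context, the initialization quoted in the Experiment Setup row can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' code: the function name `init_cbq` and the defaults `L=4`, `b=4` are made up for the example, and the paper-specific estimation of the scaling variable λ and the subsequent joint optimization are omitted.

```python
# Minimal sketch of the K-means initialization described above (not the
# authors' implementation). With L hash tables of b-bit codes, the
# initial number of prototypes is M = L * 2^b.
import numpy as np
from sklearn.cluster import KMeans

def init_cbq(X: np.ndarray, L: int = 4, b: int = 4, seed: int = 0):
    """Cluster the training set X (n_samples x dim) into M = L * 2^b
    prototypes, returning initial prototypes P and assignments m_i."""
    M = L * (2 ** b)                 # number of clusters / prototypes
    km = KMeans(n_clusters=M, n_init=1, random_state=seed).fit(X)
    P = km.cluster_centers_          # initial prototypes P, shape (M, dim)
    m = km.labels_                   # initial assignment m_i per sample
    return P, m

# Usage on random data standing in for the 10,000-sample training set.
X_train = np.random.randn(10000, 128).astype(np.float32)
P, m = init_cbq(X_train, L=4, b=4)   # M = 4 * 2^4 = 64 prototypes
```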