Scalable Graph Hashing with Feature Transformation

Authors: Qing-Yuan Jiang, Wu-Jun Li

IJCAI 2015

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experiments on two datasets with one million data points show that our SGH method can outperform the state-of-the-art methods in terms of both accuracy and scalability." |
| Researcher Affiliation | Academia | Qing-Yuan Jiang and Wu-Jun Li, National Key Laboratory for Novel Software Technology, Collaborative Innovation Center of Novel Software Technology and Industrialization, Department of Computer Science and Technology, Nanjing University, China; jiangqy@lamda.nju.edu.cn, liwujun@nju.edu.cn |
| Pseudocode | Yes | "Algorithm 1: Sequential learning algorithm for SGH" |
| Open Source Code | No | The paper provides no explicit statement about, or link to, open-source code for the described methodology. |
| Open Datasets | Yes | "We evaluate our method on two widely used large-scale benchmark datasets: TINY-1M [Liu et al., 2014] and MIRFLICKR-1M [Huiskes et al., 2010]." |
| Dataset Splits | No | "For each dataset, we randomly select 5000 data points to construct the test (query) set and the remaining points will be used for training." No separate validation set or validation split percentages/counts are mentioned. |
| Hardware Specification | Yes | "All the experiments are conducted on a workstation with Intel(R) CPU E5-2620V2 @ 2.1G, 12 cores, and 64G RAM." |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) used in the experiments. |
| Experiment Setup | Yes | "For kernel feature construction, we use Gaussian kernel and take 300 randomly sampled points as kernel bases for our method. We set the parameter ρ = 2 in P(X) and Q(X). Here, γ is a very small positive number to avoid numerical problems, which is 10^-6 in our experiments." |
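
The Dataset Splits row quotes a random 5000-point query hold-out, with the remaining points used for training. Below is a minimal sketch of that split, assuming NumPy; the matrix `X`, the seed, and the helper name `query_split` are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is repeatable

def query_split(X, n_query=5000):
    """Randomly hold out `n_query` rows as the test (query) set;
    the remaining rows form the training set, as described in the paper."""
    perm = rng.permutation(X.shape[0])
    return X[perm[n_query:]], X[perm[:n_query]]  # (train, query)

# Stand-in data; the real experiments use TINY-1M and MIRFLICKR-1M features.
X = rng.standard_normal((10_000, 64))
X_train, X_query = query_split(X)
print(X_train.shape, X_query.shape)  # (5000, 64) (5000, 64)
```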
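The Experiment Setup row describes Gaussian kernel features computed against 300 randomly sampled kernel bases. The sketch below shows one plausible reading of that construction, again assuming NumPy; the bandwidth heuristic (`sigma` set from the mean squared distance) is an assumption, since the quoted setup does not state how the kernel width was chosen.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_features(X, n_bases=300, sigma=None):
    """Gaussian (RBF) kernel features against `n_bases` randomly sampled
    anchor points. `sigma` falls back to a mean-distance heuristic, which
    is an assumption: the paper does not state its bandwidth choice."""
    bases = X[rng.choice(X.shape[0], n_bases, replace=False)]
    # Squared Euclidean distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq_dists = (
        (X ** 2).sum(1)[:, None]
        + (bases ** 2).sum(1)[None, :]
        - 2.0 * X @ bases.T
    )
    sq_dists = np.maximum(sq_dists, 0.0)  # guard against tiny negative float error
    if sigma is None:
        sigma = np.sqrt(sq_dists.mean())
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

K = kernel_features(rng.standard_normal((2_000, 64)))
print(K.shape)  # (2000, 300): one Gaussian response per kernel basis
```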