Large Graph Hashing with Spectral Rotation
Authors: Xuelong Li, Di Hu, Feiping Nie
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on three large-scale benchmark datasets, and the results show that our method outperforms state-of-the-art hashing methods, especially the spectral graph ones. |
| Researcher Affiliation | Academia | School of Computer Science and Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, P. R. China; xuelong_li@opt.ac.cn, hdui831@mail.nwpu.edu.cn, feipingnie@gmail.com |
| Pseudocode | Yes | Algorithm 1 Large Graph Hashing with Spectral Rotation |
| Open Source Code | No | The paper does not contain any explicit statements about making the source code available or provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | There are three large-scale datasets used for evaluating the above methods: MNIST (LeCun et al. 1998), CIFAR-10 (Krizhevsky and Hinton 2009), and YouTube Faces (Wolf, Hassner, and Maoz 2011). |
| Dataset Splits | Yes | The well-known MNIST dataset consists of 70,000 images of the digits 0 to 9, each of 784 dimensions. It is split into a training set of 69,000 samples and a testing set of 1,000 samples (100 samples per digit). The CIFAR-10 dataset consists of 60,000 32×32 color images, with 6,000 images per object category. Its testing set consists of 100 uniformly randomly sampled images per category, and the training set contains all remaining samples. For YouTube Faces, the remaining samples in the built subset form the training set. A sketch of this per-class split appears after the table. |
| Hardware Specification | Yes | The experiments are conducted on a desktop PC with a 4-core 3.20 GHz CPU and 16 GB RAM. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software libraries, frameworks, or environments used in the experiments (e.g., Python version, PyTorch version, etc.). |
| Experiment Setup | Yes | Following the common setting of the anchor number m and the neighborhood number s (Shen et al. 2013; Liu et al. 2014), we set m = 300 and s = 3 in the experiments, and K-means is utilized to generate the anchor points of the training data. The number of iterations N is set to 20 for all experiments. An anchor-construction sketch follows the table. |
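
The per-class train/test split described in the Dataset Splits row (100 uniformly sampled test images per class, all remaining samples for training) can be reproduced with a short routine. The sketch below is only an assumption about how such a split might be built, since the paper gives no code; the array names `features` and `labels`, the NumPy dependency, and the fixed seed are hypothetical.

```python
# Hypothetical sketch of the per-class split from the Dataset Splits row:
# 100 test samples per class, all remaining samples used for training.
import numpy as np

def per_class_split(features, labels, test_per_class=100, seed=0):
    """Uniformly sample `test_per_class` items from each class for testing;
    all remaining samples form the training set."""
    rng = np.random.default_rng(seed)
    test_idx = []
    for c in np.unique(labels):
        class_idx = np.flatnonzero(labels == c)
        test_idx.append(rng.choice(class_idx, size=test_per_class, replace=False))
    test_idx = np.concatenate(test_idx)
    train_mask = np.ones(len(labels), dtype=bool)
    train_mask[test_idx] = False
    return (features[train_mask], labels[train_mask],
            features[test_idx], labels[test_idx])

# MNIST: 70,000 x 784 features -> 69,000 training / 1,000 testing samples
# CIFAR-10: 60,000 images      -> 59,000 training / 1,000 testing samples
```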
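
The Experiment Setup row fixes m = 300 K-means anchors and s = 3 nearest anchors per training sample. The sketch below illustrates that anchor setup under the common anchor-graph construction (Liu et al. 2014); the Gaussian weighting, the `build_anchor_graph` name, and the scikit-learn/NumPy dependencies are assumptions, not the paper's implementation.

```python
# Hedged sketch of the anchor setup: m = 300 K-means anchors, s = 3 nearest
# anchors per sample, with a standard Gaussian-weighted anchor affinity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def build_anchor_graph(X, m=300, s=3, seed=0):
    # 1) K-means on the training data yields the m anchor points (m = 300 in the paper).
    anchors = KMeans(n_clusters=m, n_init=10, random_state=seed).fit(X).cluster_centers_

    # 2) For each sample, keep only its s nearest anchors (s = 3 in the paper).
    D = pairwise_distances(X, anchors)            # n x m anchor distances
    rows = np.arange(X.shape[0])[:, None]
    nearest = np.argsort(D, axis=1)[:, :s]        # indices of the s nearest anchors

    # 3) Gaussian weights on the kept distances, each row normalised to sum to 1
    #    (the usual anchor-graph weighting; the paper may use a different kernel).
    kept = D[rows, nearest]
    sigma = kept.mean()
    Z = np.zeros_like(D)
    Z[rows, nearest] = np.exp(-kept ** 2 / (2 * sigma ** 2))
    Z /= Z.sum(axis=1, keepdims=True)
    return anchors, Z
```

The resulting anchor affinity would feed the spectral step of Algorithm 1; only the parameter values m, s, and N are stated in the paper, so everything else above is illustrative.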