Asymmetric Discrete Graph Hashing

Authors: Xiaoshuang Shi, Fuyong Xing, Kaidi Xu, Manish Sapkota, Lin Yang

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three benchmark large-scale databases demonstrate its superior performance over the recent state of the arts with lower training time costs.
Researcher Affiliation | Academia | University of Florida, Gainesville, FL 32611, USA
Pseudocode | Yes | Algorithm 1 (ADGH). Input: data X ∈ R^(d×n), anchors X̄ ∈ R^(d×m), affinity V ∈ R^(n×m), code length r, and parameters γ, λ. Output: B ∈ {-1, 1}^(n×r), A ∈ R^(d×r). Initialize: t = 0; set A_0 to the top-r eigenvectors of X V X̄ᵀ and D_0 = sgn(X̄ᵀ A_0); compute M = (X Xᵀ + λ I_d)^(-1) X. Repeat: update B_{t+1} = sgn(V D_t + γ Xᵀ A_t); update D_{t+1} = sgn(Vᵀ B_{t+1}); update C_{t+1} based on Theorem 1; update A_{t+1} = M C_{t+1}; until convergence. (A hedged NumPy sketch of this loop appears after the table.)
Open Source Code | No | The paper does not provide an explicit statement of code release or a link to a code repository for the described methodology.
Open Datasets | Yes | We evaluate the proposed algorithms ADGH and KADGH on three benchmark large-scale image databases: CIFAR-10 (Torralba, Fergus, and Freeman 2008), YouTube Faces (Wolf, Hassner, and Maoz 2011) and ImageNet (Deng et al. 2009).
Dataset Splits | Yes | In our experiments, we split the CIFAR-10 database into a training set (59K images) and a test query set (1K images); the query set consists of 10 categories, each containing 100 images. We also partition the selected YouTube Faces set into training and test query sets, randomly picking 300 and 10 images of each individual for training and testing, respectively. For the ImageNet subset, we randomly choose 1000 and 20 images from each category for training and testing, respectively.
Hardware Specification | Yes | All experiments are conducted using Matlab on a 3.50 GHz Intel Xeon CPU with 128 GB memory.
Software Dependencies | No | The paper states 'All experiments are conducted using Matlab' but does not specify a version number or other software dependencies with versions.
Experiment Setup | Yes | We set the regularization parameter λ = 0.01 for all of ADGH, KADGH and SDGH, γ = 0.1 for ADGH, γ = 0.01 for KADGH, and search for the best γ over [0.01, 100] for SDGH. We choose the same kernel as KSH for KADGH in experiments. To construct the affinity matrix V, we randomly select 10 percent of the samples of each class as anchors for ADGH and KADGH. (A sketch of this anchor selection follows the algorithm sketch below.)
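
To make Algorithm 1 easier to trace, here is a minimal NumPy sketch of the ADGH alternating updates, under stated assumptions: the anchor matrix (Xbar, standing for X̄), the affinity V, and the code length r are given, and the closed-form C-step from the paper's Theorem 1 is not reproduced here, so C = B is substituted as a simple stand-in. The function name adgh and all variable names are illustrative, not from the paper.

    import numpy as np

    def adgh(X, Xbar, V, r, gamma=0.1, lam=0.01, iters=20):
        """ADGH sketch. X: (d, n) training data; Xbar: (d, m) anchors;
        V: (n, m) affinity; r: code length. Returns binary codes B in
        {-1, 1}^(n x r) and the projection matrix A in R^(d x r)."""
        d, n = X.shape
        # Initialization: A0 = top-r eigenvectors of X V Xbar^T (a d x d
        # matrix), symmetrized before eigendecomposition for numerical safety.
        S = X @ V @ Xbar.T
        w, U = np.linalg.eigh((S + S.T) / 2)
        A = U[:, np.argsort(w)[::-1][:r]]
        D = np.where(Xbar.T @ A >= 0, 1.0, -1.0)           # D0 = sgn(Xbar^T A0)
        M = np.linalg.solve(X @ X.T + lam * np.eye(d), X)  # (X X^T + lam I)^-1 X
        for _ in range(iters):
            # B-step: B = sgn(V D + gamma X^T A)
            B = np.where(V @ D + gamma * (X.T @ A) >= 0, 1.0, -1.0)
            # D-step: D = sgn(V^T B)
            D = np.where(V.T @ B >= 0, 1.0, -1.0)
            # C-step: the paper derives C from Theorem 1; as a stand-in
            # (an assumption, not the paper's closed form) we use C = B.
            C = B
            # A-step: ridge-regression-style update A = M C
            A = M @ C
        return B, A

Thresholding with >= 0 maps sgn to {-1, 1} and avoids the zero entries that np.sign would produce; in practice a convergence check on B (e.g., stopping once B stops changing between iterations) would replace the fixed iteration count.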
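
The setup row also pins down how anchors are drawn: 10 percent of the samples of each class. The sketch below assumes a KSH-style label affinity, V[i, j] = +1 if sample i and anchor j share a class and -1 otherwise; the paper's exact definition of V may differ, so treat this construction, and the helper name build_anchors_and_affinity, as illustrative.

    import numpy as np

    def build_anchors_and_affinity(X, y, frac=0.10, seed=0):
        """X: (d, n) features; y: (n,) integer class labels. Draws frac of
        each class as anchors and builds a label-based affinity V (n, m)."""
        rng = np.random.default_rng(seed)
        anchor_idx = []
        for c in np.unique(y):
            idx = np.flatnonzero(y == c)
            k = max(1, int(round(frac * idx.size)))
            anchor_idx.extend(rng.choice(idx, size=k, replace=False))
        anchor_idx = np.asarray(anchor_idx)
        Xbar = X[:, anchor_idx]                  # anchor features, (d, m)
        # +1 for same-class (sample, anchor) pairs, -1 otherwise (assumed form)
        V = np.where(y[:, None] == y[anchor_idx][None, :], 1.0, -1.0)
        return Xbar, V

Feeding the resulting Xbar and V into the adgh sketch above with gamma=0.1 and lam=0.01 mirrors the reported ADGH setting of γ = 0.1 and λ = 0.01.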