Supervised Short-Length Hashing

Authors: Xingbo Liu, Xiushan Nie, Quan Zhou, Xiaoming Xi, Lei Zhu, Yilong Yin

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments conducted on four image benchmarks demonstrate the superior performance of the proposed SSLH with short-length hash codes. In addition, the proposed SSLH outperforms the existing methods with long-length hash codes." "In this section, we present the experimental settings and results. The hyperparameter settings employed are listed below. Extensive experiments were conducted on four image datasets to evaluate the proposed method and compare it with several state-of-the-art methods."
Researcher Affiliation | Academia | Xingbo Liu^1, Xiushan Nie^2, Quan Zhou^3, Xiaoming Xi^2, Lei Zhu^4, and Yilong Yin^3. ^1 School of Computer Science and Technology, Shandong University, Jinan, P.R. China; ^2 School of Computer Science and Technology, Shandong Jianzhu University, Jinan, P.R. China; ^3 School of Software, Shandong University, Jinan, P.R. China; ^4 School of Information Science and Engineering, Shandong Normal University, Jinan, P.R. China.
Pseudocode | Yes | Algorithm 1: Supervised Short-Length Hashing (SSLH).
Input: training set X; semantic labels Y; code length L; hyperparameters δ, d, α, β, γ, and λ; number of iterations T.
1: Initialize W, P, and U as random zero-centered matrices, and H as a random {-1, +1}^{L×n} matrix. Compute V via the kernel function in Eq. (3).
2: repeat
3:   W-step: solve W via Eq. (11), with the other variables fixed.
4:   H-step: solve H bit by bit via Eq. (16), with the other variables fixed.
5:   P-step: solve P via Eq. (18), with the other variables fixed.
6:   U-step: solve U via Eq. (20), with the other variables fixed.
7:   D-step: solve D via Eq. (2), with the other variables fixed.
8: until convergence or a fixed number of iterations.
Output: hash matrix H; projection matrix P.
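Algorithm 1's fix-all-but-one alternating pattern can be illustrated with a reduced two-block toy (binary codes H and a projection P) rather than SSLH's five blocks. This is a hedged sketch, not the authors' method: the objective, solver names, and shapes below are assumptions chosen to make the alternation concrete, and the paper's actual closed-form updates are Eqs. (11), (16), (18), (20), and (2).

```python
import numpy as np

def alternating_hash(V, L, T=10, lam=1e-4, seed=0):
    """Toy two-block analogue of Algorithm 1's alternating scheme:
    minimize ||H - P V||_F^2 + lam ||P||_F^2 over a real projection P
    and binary codes H in {-1, +1}^{L x n}, solving one block in
    closed form while the other is held fixed.
    (SSLH itself alternates over five blocks: W, H, P, U, D.)
    """
    rng = np.random.default_rng(seed)
    d, n = V.shape
    # random {-1, +1} initialization of the codes, as in step 1
    H = np.where(rng.standard_normal((L, n)) >= 0, 1.0, -1.0)
    for _ in range(T):
        # P-step: ridge-regression closed form with H fixed
        P = H @ V.T @ np.linalg.inv(V @ V.T + lam * np.eye(d))
        # H-step: with P fixed, the minimizer of ||H - P V||^2
        # over {-1, +1} entries is the elementwise sign of P V
        H = np.sign(P @ V)
        H[H == 0] = 1.0
    return H, P

rng = np.random.default_rng(1)
V = rng.standard_normal((8, 50))   # 8-dim kernelized features, 50 samples
H, P = alternating_hash(V, L=16)
print(H.shape)  # (16, 50)
```

Each sub-step decreases the shared objective, so the loop converges in objective value, which is the property the "until convergence" test in Algorithm 1 relies on.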
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | Four extensively used image datasets were utilized in the experiments: CIFAR-10 [1] [Krizhevsky and Hinton, 2009], CALTECH-101 [2] [Fei-Fei et al., 2007], MS-COCO [3] [Lin et al., 2014], and NUS-WIDE [4] [Chua et al., 2009]. [1] https://www.cs.toronto.edu/~kriz/cifar.html [2] http://www.vision.caltech.edu/Image_Datasets/Caltech101/ [3] http://cocodataset.org/ [4] http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm
Dataset Splits | No | CIFAR-10 is a single-label dataset containing 60,000 images belonging to 10 classes, with 6,000 images per class. The authors randomly selected 5,000 training images and 1,000 testing images (100 test images per class). The paper specifies training and testing sets but does not provide details for a separate validation split.
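The stated CIFAR-10 protocol can be approximated with a class-balanced random draw. This is a hedged sketch only: the paper does not release its sampling code, and the 500-train-per-class figure below is inferred from 5,000 images over 10 classes, not stated explicitly.

```python
import numpy as np

def balanced_split(labels, n_train_per_class, n_test_per_class, seed=0):
    """Draw a fixed number of disjoint train/test indices per class.
    Sketches the CIFAR-10 protocol described in the paper
    (5,000 train / 1,000 test, class-balanced)."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        train_idx.extend(idx[:n_train_per_class])
        test_idx.extend(idx[n_train_per_class:n_train_per_class + n_test_per_class])
    return np.array(train_idx), np.array(test_idx)

labels = np.repeat(np.arange(10), 6000)  # CIFAR-10: 6,000 images per class
tr, te = balanced_split(labels, 500, 100)
print(len(tr), len(te))  # 5000 1000
```

Because the draw is per class, the split stays balanced even if the full label array is shuffled differently across runs.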
Hardware Specification | Yes | The experiments were performed on a computer with an Intel(R) Core(TM) i7-4790 CPU and 32 GB of RAM.
Software Dependencies | No | The paper mentions using a "CNN-F model" and a "kernel function" but does not list specific software dependencies with version numbers required for replication (e.g., Python, or library versions such as PyTorch, TensorFlow, or specific solvers).
Experiment Setup | Yes | As the experimental parameters, the authors empirically set α = β = γ = λ = 10^{-4} and δ = 2.
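Algorithm 1 computes V from X via "the kernel function according to Eq. (3)", with δ and d among the hyperparameters. Supervised discrete hashing methods commonly use an RBF anchor embedding for this step; the sketch below assumes that form, with δ as the kernel width and d as the number of anchors. Both the functional form and the role of δ and d are assumptions here and should be checked against the paper's Eq. (3).

```python
import numpy as np

def rbf_anchor_features(X, anchors, delta=2.0):
    """Assumed anchor-based RBF embedding for the V = phi(X) step:
    phi(x)_j = exp(-||x - a_j||^2 / delta), a common choice in
    supervised discrete hashing (not confirmed to match Eq. (3))."""
    # squared Euclidean distances between every sample and every anchor
    sq = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / delta)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))                    # 100 samples, 32-dim
anchors = X[rng.choice(100, size=16, replace=False)]  # d = 16 anchors
V = rbf_anchor_features(X, anchors, delta=2.0)        # delta = 2, as reported
print(V.shape)  # (100, 16)
```

Anchors are typically drawn at random from the training set, so d controls both the dimensionality of V and the cost of the downstream closed-form solves.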