Supervised Deep Hashing for Hierarchical Labeled Data

Authors: Dan Wang, Heyan Huang, Chi Lu, Bo-Si Feng, Guihua Wen, Liqiang Nie, Xian-Ling Mao

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on two real-world public datasets show that the proposed method outperforms the state-of-the-art baselines in the image retrieval task.
Researcher Affiliation | Academia | Beijing Institute of Technology, China; South China University of Technology, China; Shandong University, China
Pseudocode | Yes | Algorithm 1: The Learning Algorithm for SHDH
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the proposed SHDH method is publicly available.
Open Datasets | Yes | We carried out experiments on two public benchmark datasets: CIFAR-100 and IAPRTC-12. CIFAR-100 is an image dataset containing 60,000 colour images of 32×32 pixels. It has 100 classes, and each class contains 600 images. The 100 classes in CIFAR-100 are grouped into 20 superclasses. Each image has a fine label (the class which it belongs to) and a coarse label (the superclass which it belongs to). Thus, the height of the hierarchical labels with a Root node in CIFAR-100 is three. The IAPRTC-12 dataset has 20,000 segmented images.
Dataset Splits | No | For both datasets, we randomly selected 90% as the training set and the remaining 10% as the test set.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions pre-trained weights from VGG-F, but does not specify any software dependencies with version numbers (e.g., specific programming language versions, deep learning frameworks, or libraries).
Experiment Setup | Yes | The hyper-parameter α in SHDH is empirically set to one. The learning rate η is initialized as 0.01 and updated empirically as η ← (2/3)η. The mini-batch size is 128 by default.
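
To make the hierarchical label structure in the Open Datasets entry concrete, below is a minimal Python sketch for reading both the fine (class) and coarse (superclass) labels from the standard CIFAR-100 "python version" archive. The local path cifar-100-python/train is an assumption about where the extracted files live; it is not specified by the paper.

```python
# Minimal sketch: reading CIFAR-100 fine (class) and coarse (superclass) labels
# from the standard "python version" archive of the dataset.
import pickle
import numpy as np

def load_cifar100(split_file):
    with open(split_file, "rb") as f:
        batch = pickle.load(f, encoding="bytes")
    images = batch[b"data"].reshape(-1, 3, 32, 32)  # 32x32 colour images
    fine = np.array(batch[b"fine_labels"])          # 100 classes
    coarse = np.array(batch[b"coarse_labels"])      # 20 superclasses
    return images, fine, coarse

# Assumed location of the extracted archive.
images, fine, coarse = load_cifar100("cifar-100-python/train")

# Together with an implicit Root node, Root -> superclass -> class gives the
# three-level label hierarchy described in the paper.
print(images.shape, fine.max() + 1, coarse.max() + 1)  # (50000, 3, 32, 32) 100 20
```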
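
The Dataset Splits entry reports a 90%/10% random split but no further protocol details. A minimal sketch of such a split is given below; the fixed seed and the dataset size are assumptions for illustration, not values given in the paper.

```python
# Minimal sketch of the reported 90%/10% random train/test split.
import numpy as np

rng = np.random.default_rng(0)   # assumed seed; the paper does not report one
n_examples = 60_000              # CIFAR-100 size; IAPRTC-12 would use 20,000
perm = rng.permutation(n_examples)
n_train = int(0.9 * n_examples)
train_idx, test_idx = perm[:n_train], perm[n_train:]
print(len(train_idx), len(test_idx))  # 54000 6000
```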
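
The Experiment Setup entry gives the only training hyper-parameters reported: α = 1, an initial learning rate of 0.01 decayed by a factor of 2/3, and a mini-batch size of 128. A hedged sketch of that schedule follows; applying the decay once per epoch is an assumption, since the paper only says the rate is updated empirically.

```python
# Sketch of the reported hyper-parameter schedule: alpha = 1, initial
# learning rate 0.01, multiplicative decay by 2/3, mini-batch size 128.
alpha = 1.0
batch_size = 128
lr = 0.01

def decay(lr, factor=2.0 / 3.0):
    """Apply the multiplicative update lr <- (2/3) * lr."""
    return lr * factor

for epoch in range(5):
    # ... run mini-batch SGD over the training set here ...
    lr = decay(lr)  # decay frequency (per epoch) is an assumption
    print(f"epoch {epoch}: lr = {lr:.5f}")
```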