SSAH: Semi-Supervised Adversarial Deep Hashing with Self-Paced Hard Sample Generation

Authors: Sheng Jin, Shangchen Zhou, Yao Liu, Chao Chen, Xiaoshuai Sun, Hongxun Yao, Xian-Sheng Hua (pp. 11157-11164)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that our method can significantly improve state-of-the-art models on both the widely-used hashing datasets and fine-grained datasets.
Researcher Affiliation | Collaboration | Harbin Institute of Technology; Alibaba DAMO Academy, Alibaba Group; Nanyang Technological University; Xiamen University
Pseudocode | Yes | Algorithm 1: Self-paced Deep Adversarial Hashing
Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code or a link to a code repository for the described methodology.
Open Datasets | Yes | CIFAR-10 (Krizhevsky and Hinton 2009) is a small image dataset containing 60k 32×32 images in 10 classes. NUS-WIDE (Chua et al. 2009) contains nearly 270k images with 81 semantic concepts. The Stanford Dogs-120 (Nilsback and Zisserman 2006) dataset consists of 20,580 images in 120 mutually exclusive classes. CUB Bird (Wah et al. 2011) includes 11,788 images in 200 mutually exclusive classes.
Dataset Splits | Yes | For NUS-WIDE, we follow (Liu et al. 2011) and use the images associated with the 21 most frequent concepts, each of which is associated with at least 5,000 images. Following (Liu et al. 2011; Wang et al. 2018), we randomly sample 100 images per class as the test set and use the remaining images as the database. During training, we randomly sample 500 images per class from the database as labeled data and treat the rest as unlabeled data. For the other datasets, we directly use the test set defined in each dataset and use its train set as the database; during training, we randomly sample 50% of the images per class from the database as labeled data and treat the rest as unlabeled data. (A per-class split sketch follows this table.)
Hardware Specification | No | The paper does not specify any particular hardware (GPU/CPU models, memory, etc.) used for running its experiments.
Software Dependencies | No | The paper mentions "PyTorch" but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | The value of hyper-parameter λ1 is 1.0, λ2 is 0.5, α is 0.5, and β is 0.1. We use mini-batch stochastic gradient descent with 0.9 momentum. We set the margin parameter ω to 0.1 and increase it by 0.02 every 5 epochs. The mini-batch size of images is fixed at 32 and the weight decay parameter at 0.0005. The value of the number of rotated hard samples n is 3. (A configuration sketch follows this table.)
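
The per-class sampling protocol quoted in the Dataset Splits row can be expressed as a short routine. The sketch below is illustrative only and assumes image labels are available as an integer array; the function name split_per_class and the seed argument are our own choices, not taken from the paper.

```python
import numpy as np

def split_per_class(labels, n_test=100, n_labeled=500, seed=0):
    """Split indices class by class.

    n_test images per class form the test (query) set; the rest form the
    database. n_labeled images per class from the database are used as
    labeled training data (pass n_labeled=None to take 50% per class, as
    described for the other datasets); the remainder is unlabeled.
    """
    rng = np.random.default_rng(seed)
    test_idx, labeled_idx, unlabeled_idx = [], [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        test_idx.extend(idx[:n_test])          # test / query images
        database = idx[n_test:]                # remaining images: retrieval database
        k = len(database) // 2 if n_labeled is None else n_labeled
        labeled_idx.extend(database[:k])       # labeled training subset
        unlabeled_idx.extend(database[k:])     # unlabeled training subset
    return (np.asarray(test_idx), np.asarray(labeled_idx),
            np.asarray(unlabeled_idx))
```

With NUS-WIDE-style labels this yields 100 test and 500 labeled images per class; calling it with n_labeled=None reproduces the 50%-labeled split described for the other datasets.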
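The Experiment Setup row specifies the optimizer and margin schedule but not the learning rate or network architecture, so both are stand-ins in the sketch below (written in PyTorch, which the paper mentions). The helper margin_omega is a hypothetical name implementing the stated ω schedule.

```python
import torch

# Hyper-parameters quoted in the Experiment Setup row.
LAMBDA1, LAMBDA2 = 1.0, 0.5   # loss weights λ1, λ2
ALPHA, BETA = 0.5, 0.1        # α, β
BATCH_SIZE = 32
NUM_ROTATED_HARD_SAMPLES = 3  # n

# Stand-in network: the actual model is the paper's hashing network.
model = torch.nn.Linear(4096, 48)

# SGD with 0.9 momentum and 0.0005 weight decay as stated; the learning
# rate is NOT given in the quoted setup, so 1e-3 is an assumed value.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=0.0005)

def margin_omega(epoch, base=0.1, step=0.02, every=5):
    """Margin ω schedule: start at 0.1 and increase by 0.02 every 5 epochs."""
    return base + step * (epoch // every)
```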