Transfer Adversarial Hashing for Hamming Space Retrieval

Authors: Zhangjie Cao, Mingsheng Long, Chao Huang, Jianmin Wang

AAAI 2018

Reproducibility assessment: each entry below gives the variable, its assessed result, and the supporting LLM response.

Research Type: Experimental
LLM Response: "Comprehensive experiments validate that TAH yields state-of-the-art Hamming space retrieval performance on standard datasets." From the Experiments section: "We extensively evaluate the efficacy of the proposed TAH model against state-of-the-art hashing methods on two benchmark datasets."

Researcher Affiliation: Academia
LLM Response: "KLiss, MOE; NEL-BDS; TNList; School of Software, Tsinghua University, China. {caozhangjie14,huangcthu}@gmail.com, {mingsheng,jimwang}@tsinghua.edu.cn"

Pseudocode: No
LLM Response: No pseudocode or algorithm block was explicitly found in the paper.

Open Source Code: No
LLM Response: "The codes and configurations will be made available online."

Open Datasets: Yes
LLM Response: "NUS-WIDE is a popular dataset for cross-modal retrieval, which contains 269,648 image-text pairs." "VisDA2017 is a cross-domain image dataset of images rendered from CAD models as synthetic image domain and real object images cropped from the COCO dataset as real image domain." Dataset links: http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm (NUS-WIDE) and https://github.com/VisionLearningGroup/taskcv-2017-public/tree/master/classification (VisDA2017).

Dataset Splits: Yes
LLM Response: "We follow similar experimental protocols as DHN (Zhu et al. 2016) and randomly sample 100 images per category as queries, with the remaining images used as the database; furthermore, we randomly sample 500 images per category (each image attached to one category in sampling) from the database as training points." "Similarly, we randomly sample 100 images per category of the target domain as queries, use the remaining target-domain images as the database, and randomly sample 500 images per category from both the source and target domains as training points."

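For concreteness, here is a minimal sketch of this per-category sampling protocol. It assumes a single-label `labels` list (each image attached to one category during sampling, per the quote above); the function name `make_splits` and all variable names are hypothetical, since the paper does not release split code.

```python
import random
from collections import defaultdict

def make_splits(labels, n_query=100, n_train=500, seed=0):
    """Per-category protocol: n_query images per category become queries,
    the remainder form the database, and n_train images per category are
    then re-sampled from the database as training points."""
    rng = random.Random(seed)
    by_cat = defaultdict(list)
    for idx, cat in enumerate(labels):  # each image attached to one category
        by_cat[cat].append(idx)

    queries, database = [], []
    for idxs in by_cat.values():
        rng.shuffle(idxs)
        queries.extend(idxs[:n_query])
        database.extend(idxs[n_query:])

    # Training points are drawn per category from the database.
    db_by_cat = defaultdict(list)
    for idx in database:
        db_by_cat[labels[idx]].append(idx)
    train = []
    for idxs in db_by_cat.values():
        train.extend(rng.sample(idxs, min(n_train, len(idxs))))
    return queries, database, train
```

For the cross-domain setting, the same routine would be applied to the target domain for queries and database, with training points drawn from both source and target domains, as the quoted protocol describes.
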
Hardware Specification: No
LLM Response: No specific hardware details (such as GPU/CPU models or memory) used for running experiments are mentioned in the paper. Only general statements appear, such as fine-tuning convolutional layers from the AlexNet model pre-trained on ImageNet 2012 and implementing TAH based on the Caffe framework, without hardware specifics.

Software Dependencies: No
LLM Response: "We adopt the AlexNet architecture (Krizhevsky, Sutskever, and Hinton 2012) for all deep hashing methods, and implement TAH based on the Caffe framework (Jia et al. 2014)." No specific version numbers for Caffe or other libraries are provided.

Experiment Setup: Yes
LLM Response: "We set its learning rate to be 10 times that of the lower layers. We use mini-batch stochastic gradient descent (SGD) with 0.9 momentum and the learning-rate annealing strategy implemented in Caffe. The penalty of the adversarial networks, μ, is increased from 0 to 1 gradually, as in RevGrad (Ganin and Lempitsky 2015). We cross-validate the learning rate from 10^-5 to 10^-3 with a multiplicative step-size of 10^(1/2). We fix the mini-batch size of images as 64 and the weight-decay parameter as 0.0005."

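To make these settings concrete, the sketch below re-expresses them in plain Python with PyTorch-style optimizer calls; the paper's actual implementation is in Caffe, so this is an illustration of the reported hyperparameters, not the authors' code. The `model.backbone`/`model.hash_layer` attribute names and the `inv_lr` annealing constants are assumptions; the μ schedule follows the RevGrad formula 2/(1 + exp(-γp)) - 1 with γ = 10, as in Ganin and Lempitsky (2015).

```python
import math
import torch.optim as optim

# Reported settings: mini-batch SGD, momentum 0.9, weight decay 0.0005,
# batch size 64; learning rate cross-validated on a log grid from
# 1e-5 to 1e-3 with multiplicative step 10**0.5.
LR_GRID = [10 ** (-5 + 0.5 * k) for k in range(5)]  # 1e-5, ~3.2e-5, 1e-4, ~3.2e-4, 1e-3

def make_optimizer(model, base_lr):
    # The new hash layer trains at 10x the learning rate of the
    # fine-tuned AlexNet layers ("backbone"/"hash_layer" are assumed names).
    return optim.SGD(
        [
            {"params": model.backbone.parameters(), "lr": base_lr},
            {"params": model.hash_layer.parameters(), "lr": 10 * base_lr},
        ],
        momentum=0.9,
        weight_decay=0.0005,
    )

def adversarial_penalty(progress, gamma=10.0):
    """RevGrad-style schedule: mu grows smoothly from 0 to 1 as the
    training progress p goes from 0 to 1 (gamma = 10 as in RevGrad)."""
    return 2.0 / (1.0 + math.exp(-gamma * progress)) - 1.0

def inv_lr(base_lr, iteration, gamma=0.001, power=0.75):
    # Caffe's "inv" annealing policy: lr = base_lr * (1 + gamma * iter)^(-power).
    # The gamma/power values here are illustrative; the paper does not report them.
    return base_lr * (1.0 + gamma * iteration) ** (-power)
```

The log-spaced grid reproduces the quoted cross-validation range: starting at 10^-5 and multiplying by 10^(1/2) four times reaches 10^-3.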