Semantic Structure-based Unsupervised Deep Hashing
Authors: Erkun Yang, Cheng Deng, Tongliang Liu, Wei Liu, Dacheng Tao
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that SSDH significantly outperforms current state-of-the-art methods. We evaluate our method on two popular benchmark datasets, NUS-WIDE and FLICKR25K. We provide extensive evaluations of the proposed hash codes and demonstrate their performance. (A sketch of the standard mAP evaluation used for such hash codes appears after the table.) |
| Researcher Affiliation | Collaboration | Erkun Yang1, Cheng Deng1, Tongliang Liu2, Wei Liu3, Dacheng Tao2 1 School of Electronic Engineering, Xidian University, Xi'an 710071, China 2 UBTECH Sydney AI Centre, SIT, FEIT, University of Sydney, Australia 3 Tencent AI Lab, Shenzhen, China |
| Pseudocode | Yes | Algorithm 1: SSDH (a hedged sketch of its semantic-structure construction step appears after the table) |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the methodology described. |
| Open Datasets | Yes | We evaluate our method on two popular benchmark datasets, NUS-WIDE and FLICKR25K. NUS-WIDE contains 269,648 images... We randomly select 5,000 images as a test set. The remaining images are used as a retrieval set, from which we randomly select 5,000 images as a training set. FLICKR25K contains 25,000 images... We randomly select 2,000 images as the test set. The remaining images are used as the retrieval set, from which we randomly select 10,000 images as the training set. (See the split sketch after the table.) |
| Dataset Splits | No | The paper specifies training and test sets but does not explicitly mention a separate validation set or how it was used for hyperparameter tuning or early stopping. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or specific cloud instance types) used for running the experiments. |
| Software Dependencies | No | We implement our approach using the open source TensorFlow [Abadi et al., 2016]... The paper mentions TensorFlow but does not specify its version number or any other software dependencies with versions. |
| Experiment Setup | Yes | The mini-batch size is set to 24 and momentum to 0.9. Training images are resized to 224 × 224 as the inputs. The first seven layers of our neural network are fine-tuned from the model pre-trained with ImageNet, and the last fully-connected layer is learnt from scratch. The learning rate is fixed at 0.001. The best result is obtained when α is 2, so we fix α to 2 in our other experiments. For other experiments in this paper, we select β as 1. (A hedged training-configuration sketch appears after the table.) |
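
Algorithm 1 in the paper builds a semantic structure from pairwise distances of pretrained deep features before any hash learning. The sketch below is one plausible NumPy reading of that step, not the authors' code: the paper's two half-Gaussian fit is simplified to sample statistics on either side of the mean distance, and the tail thresholds `m1 - alpha * s1` / `m2 + alpha * s2` are an assumption about how the fitted parameters combine with the paper's α.

```python
import numpy as np

def build_semantic_structure(features, alpha=2.0):
    """Hedged sketch of SSDH's semantic-structure step: mark confidently
    similar pairs +1, confidently dissimilar pairs -1, the rest 0."""
    # Cosine distances between all pairs of deep features.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    dist = 1.0 - normed @ normed.T

    # Simplified stand-in for the paper's two half-Gaussian fit:
    # estimate (mean, std) separately on each side of the mean distance.
    vals = dist[np.triu_indices_from(dist, k=1)]
    left, right = vals[vals <= vals.mean()], vals[vals > vals.mean()]
    m1, s1 = left.mean(), left.std()
    m2, s2 = right.mean(), right.std()

    # Assumed thresholds: only pairs deep in either tail get a label.
    S = np.zeros_like(dist)
    S[dist <= m1 - alpha * s1] = 1.0   # confidently similar
    S[dist >= m2 + alpha * s2] = -1.0  # confidently dissimilar
    return S
```

With α = 2 (the value the paper fixes after its sensitivity study), only high-confidence pairs receive nonzero entries; pairs left at 0 would simply not contribute to the hashing objective.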
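
The reported test/retrieval/training splits can be reproduced mechanically. Since no code or random seed is released, the seed and function name below are hypothetical; only the set sizes come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical seed; the paper reports none

def split_indices(num_images, num_test, num_train):
    """Random split as described: a held-out test set, the remainder as
    the retrieval set, and a training set drawn from the retrieval set."""
    perm = rng.permutation(num_images)
    test = perm[:num_test]
    retrieval = perm[num_test:]
    train = rng.choice(retrieval, size=num_train, replace=False)
    return test, retrieval, train

# NUS-WIDE: 5,000 test / 5,000 train; FLICKR25K: 2,000 test / 10,000 train.
nus_test, nus_retrieval, nus_train = split_indices(269648, 5000, 5000)
flk_test, flk_retrieval, flk_train = split_indices(25000, 2000, 10000)
```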
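
A minimal TensorFlow 2 / Keras sketch of the reported setup: mini-batch 24, SGD momentum 0.9, fixed learning rate 0.001, 224 × 224 inputs, backbone fine-tuned from ImageNet weights with the last fully-connected layer learnt from scratch. `VGG16` and `hash_bits` are stand-in assumptions here, not the authors' exact backbone or code (the paper predates TF2 and does not name its TensorFlow version).

```python
import tensorflow as tf

hash_bits = 64  # hypothetical code length; the paper evaluates several

# ImageNet-pretrained backbone, fine-tuned rather than frozen.
backbone = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Flatten(),
    # Final layer learnt from scratch; tanh keeps outputs near {-1, +1}.
    tf.keras.layers.Dense(hash_bits, activation="tanh"),
])

# Fixed learning rate 0.001 and momentum 0.9, as reported.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
# Mini-batches of 24 resized images would then be fed through this model
# inside the SSDH training loop driven by the semantic structure S.
```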
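
Finally, hashing papers such as this one are typically scored by mean average precision over Hamming ranking; both NUS-WIDE and FLICKR25K are multi-label, so a retrieved image counts as relevant if it shares at least one label with the query. The sketch below follows that convention; the `top_k` cut-off of 5,000 is a common choice and an assumption here, not a value quoted from the paper.

```python
import numpy as np

def mean_average_precision(query_codes, db_codes,
                           query_labels, db_labels, top_k=5000):
    """mAP over Hamming ranking for codes in {-1, +1} (hedged sketch)."""
    num_bits = query_codes.shape[1]
    aps = []
    for q, ql in zip(query_codes, query_labels):
        ham = 0.5 * (num_bits - db_codes @ q)   # Hamming distance via dot product
        order = np.argsort(ham)[:top_k]         # Hamming ranking, top_k cut-off
        relevant = (db_labels[order] @ ql) > 0  # share at least one label
        if relevant.sum() == 0:
            aps.append(0.0)
            continue
        precision_at = np.cumsum(relevant) / (np.arange(len(order)) + 1)
        aps.append(float((precision_at * relevant).sum() / relevant.sum()))
    return float(np.mean(aps))
```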