Deep Unsupervised Image Hashing by Maximizing Bit Entropy
Authors: Yunqiang Li, Jan van Gemert
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the image datasets Flickr25k, Nus-wide, Cifar-10, Mscoco, Mnist and the video datasets Ucf-101 and Hmdb-51 show that our approach leads to compact codes and compares favorably to the current state-of-the-art. |
| Researcher Affiliation | Academia | Yunqiang Li and Jan van Gemert Computer Vision Lab, Delft University of Technology, Netherlands {y.li-19, j.c.vangemert}@tudelft.nl |
| Pseudocode | No | The paper describes the forward and backward passes using mathematical equations and textual explanations, but does not present them in a structured pseudocode or algorithm block (a hedged sketch of the described bi-half layer appears after this table). |
| Open Source Code | Yes | and make our code available: https://github.com/liyunqianggyn/Deep-Unsupervised-Image-Hashing |
| Open Datasets | Yes | Flickr25k (Huiskes and Lew 2008) contains 25k images categorized into 24 classes. ... Nus-wide (Chua et al. 2009) has around 270k images ... Cifar-10 (Krizhevsky and Hinton 2009) consists of 60k color images ... Mscoco (Lin et al. 2014b) is a dataset ... Mnist (Le Cun et al. 1998) contains 70k gray-scale ... Ucf-101 (Soomro, Zamir, and Shah 2012) contains 13,320 action instances ... Hmdb-51 (Kuehne et al. 2011) includes 6,766 videos ... |
| Dataset Splits | No | The paper mentions that 'validation loss saturates' and that 'The hyper-parameters γ is tuned by cross-validation on training set', implying some form of validation. However, it does not consistently state the proportion or methodology of a dedicated validation split for each dataset, as it does for the training and test splits. |
| Hardware Specification | No | The paper mentions using pre-trained VGG-16 and ResNet backbones, but it does not specify any particular hardware details such as GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper discusses optimizers (SGD) and mentions pre-trained models (VGG-16, ResNet), but it does not provide specific version numbers for any software dependencies like programming languages, deep learning frameworks (e.g., PyTorch, TensorFlow), or libraries. |
| Experiment Setup | Yes | During training, we use Stochastic Gradient Descent (SGD) as the optimizer with a momentum of 0.9, a weight decay of 5×10⁻⁴, and a batch size of 32. In all experiments, the initial learning rate is set as 0.0001 and we divide the learning rate by 10 when the loss stops decreasing. The hyper-parameter γ is tuned by cross-validation on the training set and set as γ = 3 · 1/(NK). (An illustrative optimizer setup also appears after this table.) |
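Although the paper provides no pseudocode, the bi-half layer it describes can be summarized in a short sketch. The following is a minimal PyTorch illustration, assuming the forward pass assigns +1 to the top half of each bit dimension over the batch and −1 to the bottom half (maximizing per-bit entropy), and the backward pass adds the proxy gradient γ(U − B) scaled by 1/(NK). `GAMMA`, the tensor shapes, and the demo at the bottom are illustrative assumptions, not the authors' exact implementation (which is available at the GitHub link above).

```python
import torch

GAMMA = 3.0  # hyper-parameter gamma from the paper; exact scale is an assumption


class BiHalf(torch.autograd.Function):
    @staticmethod
    def forward(ctx, u):
        n, k = u.shape  # batch size N, code length K
        # Rank each bit dimension over the batch: top half -> +1, bottom half -> -1.
        _, index = u.sort(dim=0, descending=True)
        half = torch.cat([torch.ones(n // 2, k), -torch.ones(n - n // 2, k)])
        b = torch.zeros_like(u).scatter_(0, index, half.to(u))
        ctx.save_for_backward(u, b)
        return b

    @staticmethod
    def backward(ctx, grad_output):
        u, b = ctx.saved_tensors
        # Pass the upstream gradient through and pull the continuous codes U
        # toward the binary codes B, scaled by 1/(N*K) as in gamma = 3 * 1/(NK).
        return grad_output + GAMMA * (u - b) / b.numel()


if __name__ == "__main__":
    u = torch.randn(32, 64, requires_grad=True)  # N=32 images, K=64 bits (assumed)
    codes = BiHalf.apply(u)
    codes.sum().backward()
    print(codes.unique())  # codes are exactly {-1., +1.}
```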
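The reported training configuration also maps directly onto a standard PyTorch optimizer. A minimal sketch, assuming a placeholder hashing head and an assumed plateau patience (the paper only says the learning rate is divided by 10 when the loss stops decreasing):

```python
import torch

model = torch.nn.Linear(4096, 64)  # placeholder for the VGG-16 hashing head
# SGD with momentum 0.9, weight decay 5e-4, initial learning rate 1e-4, as reported.
optimizer = torch.optim.SGD(
    model.parameters(), lr=1e-4, momentum=0.9, weight_decay=5e-4
)
# Divide the learning rate by 10 when the loss plateaus; patience is an assumption.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=10
)
```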