Progressive Generative Hashing for Image Retrieval
Authors: Yuqing Ma, Yue He, Fan Ding, Sheng Hu, Jun Li, Xianglong Liu
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the widely-used image datasets demonstrate that PGH can significantly outperform state-of-the-art unsupervised hashing methods. |
| Researcher Affiliation | Academia | State Key Lab of Software Development Environment, Beihang University, China |
| Pseudocode | Yes | Algorithm 1 Progressive Generative Hashing (PGH) |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | MNIST [LeCun et al., 1998] and CIFAR-10 [Krizhevsky, 2009]. |
| Dataset Splits | Yes | MNIST is a famous handwritten digits dataset containing a training set of 60,000 examples and a test set of 10,000 examples. It has been widely used for computer vision tasks to test the generalization of different algorithms. The CIFAR-10 dataset consists of 60,000 32×32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images (a loading sketch in PyTorch follows the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Pytorch' but does not specify a version number or other software dependencies with version details. |
| Experiment Setup | Yes | in all experiments we employ the Adam optimizer with a mini-batch size of 64. Besides, we fix the learning rate and the momentum to 0.0002 and 0.9, respectively. ... We set the start learning rate to 0.0003, and decrease it by 10% every 10 epochs. The mini-batch size is 32, and each image is normalized and rescaled to 224×224 as the network input. ... The start learning rate is 0.002, decreased by 10% every 20 epochs. Here, we use a mini-batch size of 64, and all images are normalized to [-1, 1]. (A hyperparameter sketch in PyTorch follows the table.) |
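
The MNIST and CIFAR-10 splits quoted in the Dataset Splits row match the standard splits shipped with torchvision. A minimal loading sketch is given below; the root path and the bare `ToTensor` transform are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: loading the standard MNIST / CIFAR-10 splits via torchvision.
# The "./data" root and minimal transform are assumptions, not from the paper.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # assumed minimal preprocessing

# MNIST: 60,000 training examples and 10,000 test examples
mnist_train = torchvision.datasets.MNIST("./data", train=True, download=True, transform=transform)
mnist_test = torchvision.datasets.MNIST("./data", train=False, download=True, transform=transform)

# CIFAR-10: 50,000 training and 10,000 test 32x32 color images in 10 classes
cifar_train = torchvision.datasets.CIFAR10("./data", train=True, download=True, transform=transform)
cifar_test = torchvision.datasets.CIFAR10("./data", train=False, download=True, transform=transform)

print(len(mnist_train), len(mnist_test))  # 60000 10000
print(len(cifar_train), len(cifar_test))  # 50000 10000
```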
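
The Experiment Setup row describes three training configurations. The sketch below expresses those hyperparameters with PyTorch optimizers and schedulers; the `nn.Linear` models are placeholders rather than the paper's architecture, "momentum 0.9" is read as Adam's beta1, "decrease by 10%" is read as multiplying the learning rate by 0.9, and using Adam for the second and third configurations is an assumption.

```python
# Hedged sketch of the quoted training configurations; models are placeholders,
# and the optimizer/scheduler choices below are assumptions, not the paper's code.
import torch
import torch.nn as nn

gan_net = nn.Linear(100, 784)            # placeholder generative model
deep_net = nn.Linear(3 * 224 * 224, 64)  # placeholder for the 224x224-input network
hash_net = nn.Linear(784, 32)            # placeholder for the remaining stage

# Adam, mini-batch 64, lr 0.0002, momentum 0.9 (read here as beta1)
opt_gan = torch.optim.Adam(gan_net.parameters(), lr=2e-4, betas=(0.9, 0.999))

# Start lr 0.0003, decreased by 10% every 10 epochs, mini-batch 32
opt_deep = torch.optim.Adam(deep_net.parameters(), lr=3e-4)
sched_deep = torch.optim.lr_scheduler.StepLR(opt_deep, step_size=10, gamma=0.9)

# Start lr 0.002, decreased by 10% every 20 epochs, mini-batch 64
opt_hash = torch.optim.Adam(hash_net.parameters(), lr=2e-3)
sched_hash = torch.optim.lr_scheduler.StepLR(opt_hash, step_size=20, gamma=0.9)

# At the end of each training epoch one would call sched_deep.step() and
# sched_hash.step() so the learning rate is scaled by 0.9 at the stated intervals.
```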