Redundancy-resistant Generative Hashing for Image Retrieval
Authors: Changying Du, Xingyu Xie, Changde Du, Hao Wang
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that our new method can significantly boost the quality of learned codes and achieve state-of-the-art performance for image retrieval. |
| Researcher Affiliation | Collaboration | Changying Du¹, Xingyu Xie², Changde Du³, Hao Wang¹ — ¹360 Search Lab, Beijing 100015, China; ²College of Automation, Nanjing University of Aeronautics and Astronautics, Nanjing, China; ³Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., repository links or explicit statements of code release) for its methodology. |
| Open Datasets | Yes | We evaluate the proposed method on two computer vision tasks: 1) Image generation/reconstruction on MNIST [Oliva and Torralba, 2001]; 2) Image retrieval on CIFAR10 [Krizhevsky, 2009] and Caltech-256 [Griffin et al., 2007]. |
| Dataset Splits | No | The paper describes training and query/gallery splits for CIFAR-10 and Caltech-256 datasets but does not explicitly mention or specify a validation set split. |
| Hardware Specification | No | The paper mentions 'modern CPU/GPU' generally but does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | Parameter Settings: For the compared methods, we use the implementations provided by their authors (Deep-SGH is implemented directly based on SGH) and set the parameters according to their original papers. Without explicit statement, 1) for our R-SGH, the prior parameter ρj is set to 0.5 for any j ∈ {1, ..., K}, the threshold parameter ϵ is set to 0.05, and both δ and η are set to 0.01; and 2) for R-SGH and Deep-SGH, the encoder and decoder network structures are set as [D-K-K-K] and [K-K-K-D] respectively, where D and K are the dimensions of the input data and the hash code respectively. |
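The [D-K-K-K] encoder and [K-K-K-D] decoder quoted above can be sketched as plain fully connected networks. The layer widths below follow the quoted setting; the concrete values of D and K, the tanh activation, the weight initialization, and the sign-threshold binarization are illustrative assumptions, not details confirmed by the paper.

```python
import numpy as np

def make_mlp(sizes, rng):
    # One random weight matrix per fully connected layer (biases omitted for brevity).
    return [rng.standard_normal((m, n)) * 0.01 for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    # tanh on hidden layers, linear output layer
    for W in layers[:-1]:
        x = np.tanh(x @ W)
    return x @ layers[-1]

D, K = 512, 32  # hypothetical input dimension and hash-code length
rng = np.random.default_rng(0)

encoder = make_mlp([D, K, K, K], rng)  # [D-K-K-K] structure from the paper
decoder = make_mlp([K, K, K, D], rng)  # [K-K-K-D] structure from the paper

x = rng.standard_normal((4, D))                    # a small batch of inputs
codes = (forward(encoder, x) > 0).astype(np.int8)  # threshold to binary codes
recon = forward(decoder, codes.astype(float))      # reconstruct from the codes
```

Retrieval would then compare such binary codes by Hamming distance; this sketch only illustrates the stated layer shapes, not the R-SGH training objective.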