Object Detection based Deep Unsupervised Hashing
Authors: Rong-Cheng Tu, Xian-Ling Mao, Bo-Si Feng, Shu-ying Yu
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two public datasets demonstrate that the proposed method outperforms the state-of-the-art unsupervised hashing methods in the image retrieval task. |
| Researcher Affiliation | Collaboration | Rong-Cheng Tu (1,2), Xian-Ling Mao (1,3), Bo-Si Feng (1), Shu-ying Yu (1); (1) Department of Computer Science and Technology, Beijing Institute of Technology, China; (2) CETC Big Data Research Institute, China; (3) Zhijiang Lab, China. {tu_rc, maoxl, 2120160986, syyu}@bit.edu.cn |
| Pseudocode | No | The paper describes its method using prose and mathematical equations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We conduct experiments on two public benchmark datasets: Pascal VOC 2007 [Everingham et al., 2010] (http://host.robots.ox.ac.uk/pascal/VOC/voc2007/) and BMVC 2009 [Allan and Verbeek, 2009] (http://pascal.inrialpes.fr/data2/flickr-bmvc2009/). |
| Dataset Splits | No | When carrying out experiments on the two datasets respectively, we randomly select 2,000 images as the test set and use the remaining images as the training set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions using YOLOv2 and Alexnet (for pre-trained weights) and SGD as an optimization algorithm, but it does not provide specific version numbers for any software components or libraries. |
| Experiment Setup | Yes | The learning rate is initialized as 0.01. The hyper-parameters α, β in ODDUH are empirically set as 2 and 100, respectively, and are discussed in Section 4.5. The learning rate is reduced to one tenth of its current value every one third of the epochs. We adopt SGD with a mini-batch size of 128 as our optimization algorithm. |
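
The "Dataset Splits" and "Experiment Setup" rows together describe the experimental protocol: a random 2,000-image test split, SGD with a mini-batch size of 128, an initial learning rate of 0.01 that is divided by 10 every one third of the epochs, and hyper-parameters α = 2, β = 100. Below is a minimal PyTorch-style sketch of that protocol, not the authors' implementation; the dataset object, model, total epoch count, and random seed are assumptions, since the paper does not report them.

```python
# Minimal sketch of the experimental protocol summarized above (assumptions
# noted inline). Reported settings: random 2,000-image test split, SGD with
# mini-batch size 128, initial learning rate 0.01 divided by 10 every one
# third of the epochs, and hyper-parameters alpha = 2, beta = 100.
import random

from torch.optim import SGD
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader, Subset

NUM_EPOCHS = 90           # assumed; the paper does not report the epoch count
ALPHA, BETA = 2.0, 100.0  # values from the paper; their role in the loss is not shown here


def split_dataset(dataset, num_test=2000, seed=0):
    """Randomly hold out `num_test` images for testing; the rest are for training."""
    indices = list(range(len(dataset)))
    random.Random(seed).shuffle(indices)  # seed is an assumption, not from the paper
    return Subset(dataset, indices[num_test:]), Subset(dataset, indices[:num_test])


def build_training(model, train_set):
    """Data loader, optimizer, and LR schedule matching the reported setup."""
    loader = DataLoader(train_set, batch_size=128, shuffle=True)
    optimizer = SGD(model.parameters(), lr=0.01)
    # Reduce the learning rate to one tenth every one third of the epochs;
    # step the scheduler once per epoch in the training loop.
    scheduler = StepLR(optimizer, step_size=NUM_EPOCHS // 3, gamma=0.1)
    return loader, optimizer, scheduler
```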