Learning to Hash Naturally Sorts
Authors: Jiaguo Yu, Yuming Shen, Menghan Wang, Haofeng Zhang, Philip H.S. Torr
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments show the proposed NSH model significantly outperforms the existing unsupervised hashing methods on three benchmarked datasets. |
| Researcher Affiliation | Collaboration | 1Nanjing University of Science and Technology, 2University of Oxford, 3eBay; {yujiaguo, zhanghf}@njust.edu.cn, {yuming.shen, philip.torr}@eng.ox.ac.uk, menghanwang@ebay.com |
| Pseudocode | Yes | Algorithm 1: The Training Procedure of NSH |
| Open Source Code | No | The paper does not contain an explicit statement about making the source code for their methodology openly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | CIFAR-10 [Krizhevsky and Hinton, 2009] comes with 60,000 images. NUS-WIDE [Chua et al., 2009] has 81 categories of images. MS COCO [Lin et al., 2014] is a benchmark for multiple tasks. |
| Dataset Splits | Yes | CIFAR-10 [Krizhevsky and Hinton, 2009] comes with 60,000 images. We follow [Ghasedi Dizaji et al., 2018] to have a 50,000-10,000 train-test split. NUS-WIDE [Chua et al., 2009]... 100 images of each class are utilized as a query set, with the remaining being the gallery. MS COCO [Lin et al., 2014]... We randomly select 5,000 images as queries with the remaining ones the database. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The proposed method is implemented with TensorFlow. However, no version number for TensorFlow or any other software dependency is provided. |
| Experiment Setup | Yes | We use the Adam optimizer [Kingma and Ba, 2015] to train the networks with a learning rate of 1e-5, and the batch size is 50. We train the model for 200 epochs at most. All the images are resized to 224x224x3 and we adopt the image augmentation strategies of MoCo-v2 [Chen et al., 2020b]. We use the ResNet-50 [He et al., 2016] until the last pooling layer, with two fully-connected layers on top serving as the hash head and the latent feature head. The contrastive temperature τc and the number of positive samples m are set to {0.1, 0.5, 0.5} and {2, 3, 3} for CIFAR-10, NUS-WIDE, and MS COCO, respectively. Following [Prillo and Eisenschlos, 2020], the softsort temperature τs is set to the code length, i.e., τs = db. |
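
Based on the setup quoted in the table above, a minimal TensorFlow/Keras sketch of the reported configuration could look as follows. This is not the authors' implementation: the code length `d_b`, the latent feature dimension `d_z`, and the layer names are assumptions introduced here for illustration.

```python
import tensorflow as tf

d_b = 32       # code length (assumed; not fixed in this summary)
d_z = 128      # latent feature dimension (assumed; not given in this summary)

# ResNet-50 kept up to its last pooling layer, followed by the two
# fully-connected heads described in the setup (hash head, latent feature head).
backbone = tf.keras.applications.ResNet50(
    include_top=False, pooling='avg', input_shape=(224, 224, 3))
features = backbone.output
hash_head = tf.keras.layers.Dense(d_b, name='hash_head')(features)
latent_head = tf.keras.layers.Dense(d_z, name='latent_head')(features)
model = tf.keras.Model(backbone.input, [hash_head, latent_head])

# Training hyper-parameters as reported in the paper summary.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)
batch_size = 50
max_epochs = 200
tau_c = 0.1          # contrastive temperature; {0.1, 0.5, 0.5} across the three datasets
m_positives = 2      # number of positive samples; {2, 3, 3} across the three datasets
tau_s = float(d_b)   # softsort temperature set to the code length
```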
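The softsort temperature τs = db refers to the differentiable sorting operator of [Prillo and Eisenschlos, 2020]. The sketch below is a generic rendering of that operator, not the authors' code; it only illustrates where the temperature enters.

```python
import tensorflow as tf

def softsort(s, tau):
    """SoftSort (Prillo and Eisenschlos, 2020): a row-stochastic [batch, n, n]
    relaxation of the permutation matrix that sorts s in descending order."""
    s_sorted = tf.sort(s, direction='DESCENDING', axis=-1)       # [batch, n]
    pairwise = tf.abs(s_sorted[:, :, None] - s[:, None, :])      # [batch, n, n]
    return tf.nn.softmax(-pairwise / tau, axis=-1)

# Example: relax the sort of a batch of score vectors with tau equal to d_b.
p_hat = softsort(tf.random.normal([50, 32]), tau=32.0)
```

A larger tau yields a smoother (more uniform) relaxation, while a smaller tau approaches a hard permutation matrix; tying tau to the code length is the choice reported in the setup row above.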