Deep Unsupervised Hashing with Latent Semantic Components
Authors: Qinghong Lin, Xiaojun Chen, Qin Zhang, Shaotian Cai, Wenzhe Zhao, Hongfa Wang
AAAI 2022, pp. 7488-7496 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three benchmark datasets demonstrate that the proposed hierarchical semantic components indeed facilitate the hashing model to achieve superior performance. In this section, we conduct experiments on various public benchmark datasets to evaluate our DSCH method. |
| Researcher Affiliation | Collaboration | Qinghong Lin1*, Xiaojun Chen1, Qin Zhang1, Shaotian Cai1, Wenzhe Zhao2, Hongfa Wang2 — 1Shenzhen University, Shenzhen, China; 2Tencent Data Platform. linqinghong@email.szu.edu.cn, {xjchen, qinzhang}@szu.edu.cn, cai.st@foxmail.com, {carsonzhao, hongfawang}@tencent.com |
| Pseudocode | Yes | Algorithm 1: Deep Semantic Component Hashing (DSCH) |
| Open Source Code | No | The paper does not provide any concrete access information (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | We evaluate our methods on three public benchmark datasets, i.e. CIFAR-10, FLICKR25K, and NUS-WIDE. |
| Dataset Splits | No | The paper specifies training, query, and retrieval sets for each dataset (e.g., for CIFAR-10: 'We randomly selected 100 images for each class as the query set, 1,000 in total. Then we used the remaining images as the retrieval set, among them, we randomly selected 1,000 images per class as the training set.'), but does not explicitly describe a validation split. This split protocol is sketched in code after the table. |
| Hardware Specification | Yes | We conducted experiments on a workstation equipped with Intel Xeon Platinum 8255C CPU and Nvidia V100 GPU. |
| Software Dependencies | No | The paper mentions using VGG-19 and Adam optimization, but does not provide specific version numbers for software dependencies such as Python, PyTorch, or other relevant libraries. |
| Experiment Setup | Yes | The epoch number was set to 100 with batch size 128, and the factor τ was set to 1. We adopted Adam optimization with learning rate η set to 5e-4. The parameters m1 and m2 were set to {1000, 1000, 2000} and {900, 100, 1500} for CIFAR, FLICKR and NUS-WIDE respectively, and λ was fixed to 0.1 by default. These settings are collected in the configuration sketch after the table. |
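The Dataset Splits row quotes the CIFAR-10 protocol (100 queries per class, the remainder as the retrieval set, and 1,000 retrieval images per class for training); below is a minimal sketch of that split. The function name and interface are illustrative assumptions, not taken from the authors' (unreleased) code.

```python
import random
from collections import defaultdict

def split_cifar10(samples, seed=0):
    """Split (image, label) pairs per the paper's CIFAR-10 protocol.

    Returns (query, retrieval, train): 100 images per class as queries
    (1,000 total), the rest as the retrieval set, and 1,000 retrieval
    images per class as the training set.
    """
    by_class = defaultdict(list)
    for img, lbl in samples:
        by_class[lbl].append((img, lbl))

    rng = random.Random(seed)
    query, retrieval, train = [], [], []
    for lbl, items in by_class.items():
        rng.shuffle(items)
        query.extend(items[:100])      # 100 queries per class
        rest = items[100:]
        retrieval.extend(rest)         # remainder forms the retrieval set
        train.extend(rest[:1000])      # 1,000 per class for training
    return query, retrieval, train
```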
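The Experiment Setup row reports the training hyperparameters; the sketch below collects them in one place. The dictionary layout and key names are assumptions for readability; only the values come from the paper.

```python
# Reported DSCH training configuration. The values are from the paper;
# the structure and key names here are assumptions, not the authors' code.
DSCH_CONFIG = {
    "backbone": "VGG-19",
    "optimizer": "Adam",
    "learning_rate": 5e-4,   # η
    "epochs": 100,
    "batch_size": 128,
    "tau": 1.0,              # factor τ
    "lambda": 0.1,           # trade-off weight λ (default)
    # component counts (m1, m2) per dataset
    "m1": {"CIFAR-10": 1000, "FLICKR25K": 1000, "NUS-WIDE": 2000},
    "m2": {"CIFAR-10": 900,  "FLICKR25K": 100,  "NUS-WIDE": 1500},
}
```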