MLS3RDUH: Deep Unsupervised Hashing via Manifold based Local Semantic Similarity Structure Reconstructing
Authors: Rong-Cheng Tu, Xian-Ling Mao, Wei Wei
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three public datasets show that the proposed method outperforms the state-of-the-art baselines. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science and Technology, Beijing Institute of Technology, China; (2) CETC Big Data Research Institute Co., Ltd., Guiyang, China; (3) Zhijiang Lab, Hangzhou, China; (4) School of Computer Science, Huazhong University of Science and Technology, China |
| Pseudocode | Yes | Algorithm 1 Learning algorithm for MLS3RDUH |
| Open Source Code | No | The paper does not provide an explicit statement or link to its open-source code. |
| Open Datasets | Yes | Three benchmark image retrieval datasets are used for evaluation, i.e., NUS-WIDE [Chua et al., 2009], MS COCO [Lin et al., 2014] and CIFAR10 [Krizhevsky et al., 2009] |
| Dataset Splits | No | While the paper mentions that MS COCO contains training and validation images, the authors then combine them to form a larger dataset from which they sample their *own* training and test sets. For NUS-WIDE and CIFAR10, they only specify training and test sets, without a distinct validation split for their experimental setup. |
| Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions "Pytorch framework" but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | The parameters in the first seven layers of the hashing model are initialized with the parameters of the first seven layers of AlexNet pretrained on ImageNet, and the parameters in the eighth layer of the hashing model are initialized by Xavier initialization [Glorot and Bengio, 2010]. We use mini-batch stochastic gradient descent (SGD) with 0.9 momentum, and the learning rate is fixed to 0.04. The iteration number is 150. We fix the mini-batch size of images as 128 and the weight decay parameter as 10^-5. We set k = 0.06N and o = 0.06N, where N is the number of training datapoints, and following [Zhou et al., 2004b], the hyper-parameter α is set to 0.99. |
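
The reported setup maps onto a standard PyTorch training configuration. Below is a minimal sketch of that configuration, assuming a torchvision AlexNet backbone; the class name `HashNet`, the `hash_bits` parameter, and the `tanh` output activation are illustrative choices, not taken from the paper. Only the hyper-parameter values (pretrained first seven layers, Xavier-initialized final layer, SGD with momentum 0.9, learning rate 0.04, weight decay 10^-5, batch size 128, 150 iterations) come from the quoted setup.

```python
import torch
import torch.nn as nn
from torchvision import models


class HashNet(nn.Module):
    """Illustrative hashing model: AlexNet backbone plus one hash layer."""

    def __init__(self, hash_bits=64):
        super().__init__()
        # First seven layers taken from AlexNet pretrained on ImageNet.
        alexnet = models.alexnet(pretrained=True)
        self.features = alexnet.features
        self.fc = nn.Sequential(*list(alexnet.classifier.children())[:-1])
        # Eighth (hash) layer, initialized with Xavier initialization.
        self.hash_layer = nn.Linear(4096, hash_bits)
        nn.init.xavier_uniform_(self.hash_layer.weight)
        nn.init.zeros_(self.hash_layer.bias)

    def forward(self, x):
        # Expects 224x224 RGB input, as in standard AlexNet preprocessing.
        x = self.features(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        # tanh squashes outputs toward {-1, +1} before binarization (assumption).
        return torch.tanh(self.hash_layer(x))


model = HashNet(hash_bits=64)

# Optimizer as reported: SGD, momentum 0.9, lr 0.04, weight decay 1e-5.
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.04, momentum=0.9, weight_decay=1e-5
)

# Training would run for 150 iterations with mini-batches of 128 images,
# per the reported setup; the loss and data pipeline are omitted here.
```
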