Label-Attended Hashing for Multi-Label Image Retrieval
Authors: Yanzhao Xie, Yu Liu, Yangtao Wang, Lianli Gao, Peng Wang, Ke Zhou
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on public multi-label datasets demonstrate that (1) LAH can achieve state-of-the-art retrieval results and (2) the use of the co-occurrence relationship and MFB not only promotes the precision of hash codes but also accelerates hash learning. |
| Researcher Affiliation | Academia | (1) Huazhong University of Science and Technology; (2) University of Electronic Science and Technology of China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | GitHub address: https://github.com/IDSM-AI/LAH |
| Open Datasets | Yes | VOC2007. [Everingham et al., 2010] consists of 9,963 multi-label images and 20 object classes. MS-COCO. [Lin et al., 2014] is a popular multiple object dataset for image recognition, segmentation and captioning, which contains 118,287 training images, 40,504 validation images and 40,775 test images... FLICKR25K. [Huiskes and Lew, 2008] is a collection of 25,000 multi-label images belonging to 24 unique provided labels... |
| Dataset Splits | Yes | MS-COCO... contains 118,287 training images, 40,504 validation images and 40,775 test images... In the part of fch, we set the model parameters (γ = 1 and λ = 0.55) of LAH by cross-validation. |
| Hardware Specification | No | The paper mentions using PyTorch for implementation but does not specify any particular hardware details such as CPU/GPU models or memory. |
| Software Dependencies | No | The processing is implemented using PyTorch. For network optimization, Stochastic Gradient Descent (SGD) [Amari, 1993] is used as the optimizer. Specific version numbers for PyTorch or other libraries are not provided. |
| Experiment Setup | Yes | For label co-occurrence embedding learning, our LAH consists of two GCN layers with output dimensionality of 1024 and 2048... we set τ = 0.4 and q = 0.2. We adopt ResNet-101 pre-trained on ImageNet... mini-batch size is fixed as 256 and the raw images (input) are randomly resized to 448×448 with random horizontal flips. In the part of MFB, ... we set k = 350 for all datasets. For fair comparisons with other algorithms, we set G = 350. ...we set the model parameters (γ = 1 and λ = 0.55)... SGD... with 0.9 momentum and 10⁻⁴ weight decay. Note that all the results are obtained within 20 epochs. |
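The reported hyperparameters can be collected into a single configuration for reproduction attempts. The sketch below is an illustrative summary, not the authors' actual configuration file: the dictionary keys and the `sgd_step` helper are hypothetical names, and the update rule simply mirrors the stated optimizer settings (SGD with 0.9 momentum and 10⁻⁴ weight decay, in the common PyTorch-style form where weight decay is folded into the gradient).

```python
# Hyperparameters reported in the paper, gathered into one place.
# The dict itself is an assumption for illustration; the values are from the text.
LAH_CONFIG = {
    "gcn_dims": (1024, 2048),      # output dims of the two GCN layers
    "tau": 0.4,                    # threshold for the co-occurrence matrix
    "q": 0.2,                      # re-weighting parameter
    "backbone": "ResNet-101",      # pre-trained on ImageNet
    "batch_size": 256,
    "input_size": (448, 448),      # random resize + random horizontal flip
    "k": 350,                      # MFB factor dimension (all datasets)
    "G": 350,                      # group setting for fair comparison
    "gamma": 1.0,                  # hash-loss parameter (cross-validated)
    "lambda": 0.55,                # hash-loss parameter (cross-validated)
    "momentum": 0.9,
    "weight_decay": 1e-4,
    "max_epochs": 20,              # all results obtained within 20 epochs
}

def sgd_step(w, grad, velocity, lr,
             momentum=LAH_CONFIG["momentum"],
             weight_decay=LAH_CONFIG["weight_decay"]):
    """One scalar SGD-with-momentum update using the reported settings.

    Weight decay is applied as an L2 term added to the gradient,
    as in torch.optim.SGD. Returns (new_weight, new_velocity).
    """
    g = grad + weight_decay * w    # gradient plus L2 regularization
    v = momentum * velocity + g    # momentum buffer update
    return w - lr * v, v
```

A single step with `w=1.0`, `grad=0.5`, zero initial velocity, and `lr=0.1` yields the expected small decayed update, which is a quick sanity check that the momentum and weight-decay terms are wired as reported.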