An End-To-End Graph Attention Network Hashing for Cross-Modal Retrieval

Authors: Huilong Jin, Yingxue Zhang, Lei Shi, Shuang Zhang, Feifei Kou, Jiapeng Yang, Chuangying Zhu, Jia Luo

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments on the NUS-WIDE, MIRFlickr25K, and MS-COCO benchmark datasets show that EGATH significantly outperforms several state-of-the-art methods.
Researcher Affiliation | Academia | Huilong Jin (1), Yingxue Zhang (2), Lei Shi (3,4), Shuang Zhang (1), Feifei Kou (5,6), Jiapeng Yang (3), Chuangying Zhu (6), Jia Luo (7). Affiliations: 1 College of Engineering, Hebei Normal University; 2 College of Computer and Cyber Security, Hebei Normal University; 3 State Key Laboratory of Media Convergence and Communication, Communication University of China; 4 Key Laboratory of Education Informatization for Nationalities (Yunnan Normal University), Ministry of Education; 5 School of Computer Science, Beijing University of Posts and Telecommunications; 6 Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology; 7 Chongqing Research Institute, Beijing University of Technology
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The source code pertaining to the paper is accessible on GitHub at the following link: https://github.com/beginner-retrieval/EGATH.
Open Datasets | Yes | The three datasets used in this study are listed below with their download links: MIRFlickr25K: http://press.liacs.nl/mirflickr/mirdownload.html; NUS-WIDE: https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html; MS-COCO: https://cocodataset.org/#download
Dataset Splits | No | Table 3 ("Partition of Datasets") lists only Train and Query partitions. It does not mention a validation split, nor does it clarify whether "Query" serves as the test set or whether a separate test set is used for evaluation.
Hardware Specification | Yes | The EGATH method is implemented in PyTorch [26], and the experiments were conducted on a server equipped with an NVIDIA GeForce RTX 3080 with 40GB of RAM.
Software Dependencies | No | The EGATH method is implemented in PyTorch [26], but the PyTorch version is not specified.
Experiment Setup | Yes | The batch size is set to 64, with the Adam [25] optimizer used for the main optimization, a weight decay of 0.0005, and a method of dynamically adjusting the learning rate. Several parameters are set for experimental optimization: α = 0.3, µ = 0.5.
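The reported hyperparameters can be collected into a minimal sketch. The paper specifies the batch size, optimizer, weight decay, and the α and µ values, but not the form of the "dynamically adjusting" learning-rate schedule; the step-decay function below (and its `base_lr`, `decay`, and `step` values) is an illustrative assumption, not the authors' method.

```python
# Hyperparameters reported in the paper's experiment setup.
BATCH_SIZE = 64        # from the paper
WEIGHT_DECAY = 0.0005  # Adam weight decay, from the paper
ALPHA = 0.3            # alpha parameter, from the paper
MU = 0.5               # mu parameter, from the paper


def dynamic_lr(epoch: int, base_lr: float = 1e-4,
               decay: float = 0.5, step: int = 30) -> float:
    """Hypothetical step-decay schedule standing in for the paper's
    unspecified dynamic learning-rate adjustment: multiply the base
    learning rate by `decay` once every `step` epochs.
    base_lr, decay, and step are illustrative values only."""
    return base_lr * (decay ** (epoch // step))


if __name__ == "__main__":
    for epoch in (0, 30, 60):
        print(f"epoch {epoch}: lr = {dynamic_lr(epoch):.2e}")
```

In a PyTorch implementation, such a schedule would typically be realized with `torch.optim.Adam(..., weight_decay=0.0005)` together with a scheduler like `torch.optim.lr_scheduler.StepLR`; since the paper does not name the schedule, that pairing is likewise an assumption.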