CIMON: Towards High-quality Hash Codes

Authors: Xiao Luo, Daqing Wu, Zeyu Ma, Chong Chen, Minghua Deng, Jinwen Ma, Zhongming Jin, Jianqiang Huang, Xian-Sheng Hua

IJCAI 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on several benchmark datasets show that the proposed method outperforms a wide range of state-of-the-art methods in both retrieval performance and robustness. |
| Researcher Affiliation | Collaboration | ¹School of Mathematical Sciences, Peking University, China; ²DAMO Academy, Alibaba Group, Hangzhou, China; ³School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China |
| Pseudocode | Yes | Algorithm 1: CIMON's Training Algorithm |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | FLICKR25K [Huiskes and Lew, 2008], CIFAR-10 [Krizhevsky et al., 2009], NUS-WIDE [Chua et al., 2009] |
| Dataset Splits | No | The paper describes training, query, and retrieval sets, but does not explicitly specify a separate validation split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | The mini-batch size is set to 24 and the learning rate is fixed at 0.001. For all three datasets, training images are resized to 224 × 224 as inputs. Data augmentation includes random cropping and resizing, rotation, cutout, color distortion, and Gaussian blur. The two introduced hyper-parameters, η and the number of clusters K in spectral clustering, are set to 0.3 and 70 by default. |
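The reported experiment setup can be collected into a single configuration sketch. This is a minimal illustration assuming a plain Python dictionary; the key names (`batch_size`, `eta`, `num_clusters`, etc.) are our own labels, since the paper specifies the values but not any config format or framework.

```python
# Hedged sketch of CIMON's reported training setup as a config dict.
# Values are taken from the paper's experiment setup; the key names
# and dict structure are illustrative assumptions.
config = {
    "batch_size": 24,                  # mini-batch size
    "learning_rate": 0.001,            # fixed learning rate
    "input_size": (224, 224),          # training images resized to 224 x 224
    "augmentations": [                 # data augmentation pipeline
        "random_crop_and_resize",
        "rotation",
        "cutout",
        "color_distortion",
        "gaussian_blur",
    ],
    "eta": 0.3,                        # introduced hyper-parameter η
    "num_clusters": 70,                # K in spectral clustering
}
```

A reimplementation would translate the `augmentations` list into the chosen framework's transform pipeline; the two introduced hyper-parameters (η = 0.3, K = 70) are the only non-standard settings to tune.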