Unsupervised Hashing with Contrastive Information Bottleneck

Authors: Zexuan Qiu, Qinliang Su, Zijing Ou, Jianxing Yu, Changyou Chen

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on three benchmark image datasets demonstrate that the proposed hashing method significantly outperforms existing baselines.
Researcher Affiliation | Academia | ¹School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; ²School of Artificial Intelligence, Sun Yat-sen University, Guangdong, China; ³CSE Department, SUNY at Buffalo
Pseudocode | No | The paper describes its methods using mathematical equations and diagrams, but does not include any pseudocode or explicitly labeled algorithm blocks. (A hedged contrastive-loss sketch appears after this table.)
Open Source Code | Yes | Our code is available at https://github.com/qiuzx2/CIBHash.
Open Datasets | Yes | 1) CIFAR-10 is a dataset consisting of 60,000 images from 10 classes [Krizhevsky and Hinton, 2009]. [...] 2) NUS-WIDE is a multi-label dataset containing 269,648 images from 81 categories [Chua et al., 2009]. [...] 3) MSCOCO is a large-scale dataset for object detection, segmentation and captioning [Lin et al., 2014].
Dataset Splits | No | The paper describes splitting data into training, query, and database sets but does not explicitly mention a separate 'validation' set or split. (A sketch of this split protocol appears after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions 'PyTorch' and 'Adam optimizer' but does not specify their version numbers or any other software dependencies with versions.
Experiment Setup | Yes | For images from the three datasets, they are all resized to 224 × 224 × 3. [...] the encoder network fθ(·) is constituted by a pretrained VGG-16 network [Simonyan and Zisserman, 2015] followed by a one-layer ReLU feedforward neural network with 1024 hidden units. [...] During the training, following previous works [Su et al., 2018; Shen et al., 2019], we fix the parameters of the pre-trained VGG-16 network, while only training the newly added feedforward neural network. [...] the learning rate is set to 0.001. The temperature τ is set to 0.3, and β is set to 0.001. (An encoder sketch appears after this table.)
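
Since the paper itself ships no pseudocode, below is a minimal sketch of an NT-Xent-style contrastive loss of the kind the quoted temperature τ = 0.3 suggests, written in PyTorch (the framework the paper reports). The function name nt_xent, the cosine normalization, and the SimCLR-style positive pairing are assumptions for illustration; the paper's actual objective additionally carries a β-weighted information-bottleneck regularizer not sketched here.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.3) -> torch.Tensor:
    """SimCLR-style NT-Xent loss over two augmented views (a generic
    formulation, not necessarily the paper's exact objective).
    z1, z2: (batch, dim) representations of the two views."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, dim), unit norm
    sim = z @ z.t() / tau                                 # cosine similarity / temperature
    sim.fill_diagonal_(float("-inf"))                     # exclude self-pairs
    # The positive for row i is its other view: i + n for i < n, i - n otherwise.
    targets = torch.arange(2 * n, device=z.device).roll(n)
    return F.cross_entropy(sim, targets)
```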
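The splits row refers to the train/query/database protocol standard in deep hashing evaluation. A minimal sketch of that index split, assuming uniform random sampling; the function name and the size arguments are placeholders, not the paper's numbers.

```python
import numpy as np

def hashing_splits(n_items: int, n_query: int, n_train: int, seed: int = 0):
    """Split item indices into a held-out query set, a retrieval database
    (everything else), and a training subset drawn from the database."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_items)
    query = idx[:n_query]
    database = idx[n_query:]
    train = rng.choice(database, size=n_train, replace=False)
    return train, query, database
```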
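The setup quote pins down the encoder: frozen pretrained VGG-16 features followed by a trainable one-layer ReLU feedforward network with 1024 hidden units on 224 × 224 × 3 inputs. Below is a minimal PyTorch sketch under those constraints; the code length n_bits, the sigmoid bit relaxation, and the use of VGG-16's 4096-d fc7 features are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class CIBHashEncoder(nn.Module):
    """Sketch: frozen VGG-16 backbone + trainable 1024-unit ReLU head."""

    def __init__(self, n_bits: int = 32):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Keep everything up to the 4096-d fc7 features (drop the 1000-way classifier).
        self.backbone = nn.Sequential(
            vgg.features,
            vgg.avgpool,
            nn.Flatten(),
            *list(vgg.classifier.children())[:-1],
        )
        self.backbone.eval()  # disable dropout in the frozen part
        for p in self.backbone.parameters():
            p.requires_grad = False  # only the new head is trained
        self.head = nn.Sequential(
            nn.Linear(4096, 1024),
            nn.ReLU(),
            nn.Linear(1024, n_bits),  # n_bits is a placeholder hash length
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.backbone(x)
        # Per-bit probabilities; threshold at 0.5 to obtain binary codes.
        return torch.sigmoid(self.head(feats))

encoder = CIBHashEncoder(n_bits=32)
codes = (encoder(torch.randn(2, 3, 224, 224)) > 0.5).float()
```

Freezing the backbone and training only the added head mirrors the quoted setup [Su et al., 2018; Shen et al., 2019]; with Adam at the reported learning rate of 0.001, only head.parameters() would be passed to the optimizer.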