Effective Abstract Reasoning with Dual-Contrast Network

Authors: Tao Zhuo, Mohan Kankanhalli

ICLR 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experimental results on the RAVEN and PGM datasets show that DCNet outperforms the state-of-the-art methods by a large margin of 5.77%. Further experiments on few training samples and model generalization also show the effectiveness of DCNet." |
| Researcher Affiliation | Academia | Tao Zhuo, Mohan Kankanhalli, School of Computing, National University of Singapore; zhuotao@nus.edu.sg, mohan@comp.nus.edu.sg |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | "Code is available at https://github.com/visiontao/dcnet." |
| Open Datasets | Yes | "Similar to the previous works (Zhang et al., 2019b; Zheng et al., 2019; Wang et al., 2020), we conduct experiments on the RAVEN (Zhang et al., 2019a) and PGM (Santoro et al., 2018)." |
| Dataset Splits | Yes | "In each configuration, the dataset is randomly split into three parts: 6 folds for training, 2 for validation, and the remaining 2 for testing." |
| Hardware Specification | Yes | "In addition, all models are trained and evaluated on a single GPU of NVIDIA GeForce 1080 Ti with 11 GB memory." |
| Software Dependencies | No | The paper mentions the "Adam optimizer (Kingma & Ba, 2015)" but does not specify software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions). |
| Experiment Setup | Yes | "During the training phase, a mini-batch size of 32 with Adam optimizer (Kingma & Ba, 2015) is employed to learn the network parameters, and the learning rate is set to 0.001 and fixed." |
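The quoted split protocol (10 folds per configuration: 6 train, 2 validation, 2 test) can be sketched in plain Python. The function name, seed, and the assumption that the sample count is divisible by 10 are illustrative choices, not details from the paper:

```python
import random


def split_folds(samples, seed=0):
    """Shuffle samples and split them into 10 equal folds:
    6 for training, 2 for validation, 2 for testing,
    mirroring the RAVEN protocol quoted above.
    Assumes len(samples) is divisible by 10."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    fold = len(samples) // 10
    train = samples[: 6 * fold]
    val = samples[6 * fold : 8 * fold]
    test = samples[8 * fold :]
    return train, val, test


# Example: 100 sample ids -> 60 / 20 / 20
train, val, test = split_folds(range(100))
```

Note that the split is per configuration, so each of RAVEN's figure configurations would be partitioned independently this way.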
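The reported experiment setup (mini-batch size 32, Adam optimizer, fixed learning rate 0.001) could be wired up in PyTorch roughly as follows. The model here is a trivial placeholder standing in for DCNet, and the input/output shapes are illustrative assumptions only:

```python
import torch
import torch.nn as nn

# Placeholder network -- NOT the paper's DCNet architecture.
model = nn.Sequential(nn.Flatten(), nn.Linear(80 * 80, 8))

# Hyperparameters quoted from the paper: Adam, lr = 0.001 and fixed
# (no learning-rate schedule), mini-batch size 32.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
batch_size = 32

# One dummy optimization step to show the loop shape: each RPM puzzle
# has 8 candidate answers, treated as an 8-way classification target.
x = torch.randn(batch_size, 80 * 80)       # assumed input size
y = torch.randint(0, 8, (batch_size,))     # index of the correct candidate
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Since the paper does not pin software versions, any reasonably recent PyTorch release should accept this configuration unchanged.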