Recurrent Relational Networks

Authors: Rasmus Berg Palm, Ulrich Paquet, Ole Winther

NeurIPS 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We achieve state-of-the-art results on the bAbI textual question-answering dataset with the recurrent relational network, consistently solving 20/20 tasks. We achieve state-of-the-art results amongst comparable methods by solving 96.6% of the hardest Sudoku puzzles. (Section 3: Experiments) |
| Researcher Affiliation | Collaboration | Rasmus Berg Palm, Technical University of Denmark and Tradeshift, rapal@dtu.dk; Ulrich Paquet, DeepMind, upaq@google.com; Ole Winther, Technical University of Denmark, olwi@dtu.dk |
| Pseudocode | No | The paper describes the algorithm using mathematical equations (1)-(4) and diagrams (Figure 1), but does not provide a formal pseudocode or algorithm block; see the sketch after the table. |
| Open Source Code | Yes | Code to reproduce all experiments can be found at github.com/rasmusbergpalm/recurrent-relational-networks. |
| Open Datasets | Yes | bAbI is a text-based QA dataset from Facebook [Weston et al., 2015]. Pretty-CLEVR is available online as part of the code for reproducing the experiments. |
| Dataset Splits | Yes | We create training, validation and testing sets totaling 216,000 Sudoku puzzles with a uniform distribution of givens between 17 and 34. |
| Hardware Specification | Yes | This research was supported by the NVIDIA Corporation with the donation of TITAN X GPUs. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or other libraries). |
| Experiment Setup | Yes | We train the network for three steps. We train our network for four steps. We run the network for 32 steps, and at every step the output function r maps each node hidden state to nine output logits corresponding to the nine possible digits. We find that using dropout and appending the question encoding to the fact encodings is important for performance. A sketch of this step loop also follows the table. |
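
Since the paper specifies the model through equations (1)-(4) rather than pseudocode, the following is a minimal PyTorch sketch of one recurrent relational step as we read those equations: a learned message function f over pairs of connected node states, a sum over incoming messages, and a recurrent node update g. The class, argument names, and MLP sizes here are illustrative assumptions, not taken from the authors' code.

```python
# Hypothetical sketch of one recurrent relational step, following our reading
# of equations (1)-(4) in the paper; all names and sizes are placeholders.
import torch
import torch.nn as nn

class RecurrentRelationalStep(nn.Module):
    def __init__(self, x_dim, node_dim, msg_dim):
        super().__init__()
        # f: message function over pairs of node hidden states (eq. 1)
        self.f = nn.Sequential(
            nn.Linear(2 * node_dim, msg_dim), nn.ReLU(),
            nn.Linear(msg_dim, msg_dim),
        )
        # g: recurrent node update from input features and summed messages (eq. 3)
        self.g = nn.LSTMCell(x_dim + msg_dim, node_dim)

    def forward(self, h, c, x, edges):
        # h, c: (N, node_dim) node states; x: (N, x_dim) embedded node inputs;
        # edges: (2, E) long tensor of directed (sender, receiver) pairs.
        senders, receivers = edges
        # eq. 1: compute a message for every directed edge
        m = self.f(torch.cat([h[senders], h[receivers]], dim=-1))
        # eq. 2: sum incoming messages per receiving node
        agg = torch.zeros(h.size(0), m.size(-1)).index_add_(0, receivers, m)
        # eq. 3: recurrent update from node input and aggregated messages
        h, c = self.g(torch.cat([x, agg], dim=-1), (h, c))
        return h, c
```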
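For the Sudoku setup quoted in the Experiment Setup row (32 steps, with the output function r mapping each node's hidden state to nine digit logits at every step), here is a hedged sketch of the step loop built on the module above. Treating r as a linear layer and summing a cross-entropy loss at every step is our reading of the setup; the edge list and dimensions are left to the caller and are not the authors' exact configuration.

```python
# Hypothetical Sudoku step loop, assuming the RecurrentRelationalStep sketch
# above; the loss-at-every-step choice is our reading of the quoted setup.
import torch
import torch.nn.functional as F

def run_sudoku(step, r, x, edges, targets, n_steps=32):
    # x: (81, x_dim) embedded cell inputs; edges: (2, E) row/column/box graph;
    # targets: (81,) digit labels in 0..8. Returns the loss summed over steps.
    n_nodes, node_dim = x.size(0), step.g.hidden_size
    h = torch.zeros(n_nodes, node_dim)
    c = torch.zeros(n_nodes, node_dim)
    loss = torch.zeros(())
    for _ in range(n_steps):
        h, c = step(h, c, x, edges)
        logits = r(h)                                   # eq. 4: nine logits per cell
        loss = loss + F.cross_entropy(logits, targets)  # supervise every step
    return loss
```

Here r could be, for example, `nn.Linear(node_dim, 9)`; applying it at every step, rather than only the last, matches the quoted statement that the output function runs at each of the 32 steps.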