Programming with a Differentiable Forth Interpreter

Authors: Matko Bošnjak, Tim Rocktäschel, Jason Naradowsky, Sebastian Riedel

ICML 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex behaviours such as sequence sorting and addition. We evaluate ∂4 on three tasks."
Researcher Affiliation | Academia | Department of Computer Science, University College London, London, UK; Department of Computer Science, University of Oxford, Oxford, UK; Department of Theoretical and Applied Linguistics, University of Cambridge, Cambridge, UK.
Pseudocode | Yes | Listing 1 gives three code alternatives (white lines are common to all; coloured, lettered lines are alternative-specific): (i) Bubble sort in Forth (green a lines), (ii) the PERMUTE sketch (blue b lines), and (iii) the COMPARE sketch (yellow c lines). Listing 2 gives the MANIPULATE sketch (green a lines) and the CHOOSE sketch (blue b lines) for Elementary Addition. Listing 3 gives the core of the Word Algebra Problem sketch.
Open Source Code | No | The paper notes that TensorFlow is "Software available from tensorflow.org", but provides no statement or link indicating that the authors' own implementation (∂4 or the experimental code) is open-source or publicly available.
Open Datasets | Yes | "We evaluate the model on the Common Core (CC) dataset, introduced by Roy & Roth (2015)."
Dataset Splits | No | The paper varies training and test sequence lengths (e.g., "For a given test sequence length, we vary the training set lengths..."), but gives no explicit train/validation/test split; training and test sets are discussed only implicitly.
Hardware Specification | No | The paper provides no hardware details (e.g., GPU/CPU models, memory, or other machine specifications) for running its experiments.
Software Dependencies | No | The paper mentions a "TensorFlow (Abadi et al., 2015) implementation" but gives no version numbers for TensorFlow or any other software dependency.
Experiment Setup | No | The paper states that "Full details of the experimental setup can be found in Appendix E," but Appendix E is not included in the provided text, and the main body contains no specific hyperparameters or system-level training settings.
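For reference, the sorting behaviour that Listing 1 expresses in Forth is ordinary bubble sort: repeated passes that swap adjacent out-of-order elements. A minimal Python reimplementation of that target behaviour (purely illustrative; this is not the paper's differentiable Forth version, and the function name is our own) looks like:

```python
def bubble_sort(seq):
    """Sort a sequence by repeatedly swapping adjacent out-of-order pairs."""
    a = list(seq)  # work on a copy; leave the input untouched
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # After pass i, the last i elements are already in their final place.
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # no swaps means the sequence is already sorted
            break
    return a

print(bubble_sort([4, 2, 7, 1, 3]))  # [1, 2, 3, 4, 7]
```

The paper's PERMUTE and COMPARE sketches correspond to leaving parts of this procedure (the swap decision, or the comparison itself) as learnable slots rather than fixed code.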