Forgetting to Learn Logic Programs
Authors: Andrew Cropper (pp. 3676-3683)
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results show that Forgetgol outperforms the alternative approaches when learning from over 10,000 tasks. |
| Researcher Affiliation | Academia | Andrew Cropper University of Oxford andrew.cropper@cs.ox.ac.uk |
| Pseudocode | Yes | Algorithm 1 Forgetgol |
| Open Source Code | Yes | All the experimental data are available at https://github.com/andrewcropper/aaai20-forgetgol. |
| Open Datasets | Yes | All the experimental data are available at https://github.com/andrewcropper/aaai20-forgetgol. |
| Dataset Splits | No | The paper uses a multi-task learning setup in which tasks are generated. It measures performance in terms of the percentage of tasks solved and the learning times, but does not specify traditional train/validation/test splits for each task's underlying examples, nor does it specify a validation split over the overall set of tasks. |
| Hardware Specification | No | The paper does not specify the hardware used for the experiments. |
| Software Dependencies | No | The paper mentions Metagol, a meta-interpretive learning (MIL) system based on a Prolog meta-interpreter, but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We enforce a timeout of 60 seconds per task per search depth. We set the maximum program size to 6 clauses. |
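The experiment setup row above reports two concrete parameters: a 60-second timeout per task per search depth and a maximum program size of 6 clauses. The following is a minimal sketch of how such settings might be encoded when scripting a Metagol-style multi-task run; the function names, the `learn/3` Prolog goal, and the constants are hypothetical illustrations, not taken from the paper or the aaai20-forgetgol repository.

```python
# Hypothetical sketch: enforcing the reported experiment settings
# (60 s timeout per task per search depth, max program size of 6 clauses).
# run_task, learn/3, TASK_TIMEOUT_S, and MAX_CLAUSES are illustrative only.

import subprocess
from typing import Optional

TASK_TIMEOUT_S = 60   # timeout per task per search depth (from the paper)
MAX_CLAUSES = 6       # maximum program size in clauses (from the paper)

def run_task(task_file: str, depth: int) -> Optional[str]:
    """Run one learning task at a given search depth, enforcing the timeout."""
    try:
        result = subprocess.run(
            ["swipl", "-g", f"learn('{task_file}', {MAX_CLAUSES}, {depth})",
             "-t", "halt"],
            capture_output=True,
            text=True,
            timeout=TASK_TIMEOUT_S,
        )
        # Treat a non-zero exit status as "task not solved at this depth".
        return result.stdout if result.returncode == 0 else None
    except subprocess.TimeoutExpired:
        return None  # task unsolved at this depth within the time limit
```

A caller would typically iterate over search depths, stopping at the first depth for which `run_task` returns a program; this mirrors the per-depth timeout described in the setup row rather than any specific driver script from the paper.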