Adaptive Neural Compilation
Authors: Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Pushmeet Kohli, Philip H. S. Torr
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our approach enables learning specifically-tuned algorithms for given data distributions with a high success rate. |
| Researcher Affiliation | Collaboration | Rudy Bunel (University of Oxford, rudy@robots.ox.ac.uk); Alban Desmaison (University of Oxford, alban@robots.ox.ac.uk); Pushmeet Kohli (Microsoft Research, pkohli@microsoft.com); Philip H.S. Torr (University of Oxford, philip.torr@eng.ox.ac.uk); M. Pawan Kumar (University of Oxford, pawan@robots.ox.ac.uk) |
| Pseudocode | Yes | Figure 2b presents an 'Intermediary representation' which is a structured, step-by-step description of an algorithm in a code-like format. |
| Open Source Code | Yes | All the code required to reproduce these experiments is available online: https://github.com/albanD/adaptive-neural-compilation |
| Open Datasets | No | The paper describes tasks (e.g., Access, Swap) and refers to prior work for tasks (Kurach et al. [8]), but does not provide concrete access information (link, DOI, formal citation for a specific public dataset) for training data. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or citations to predefined splits) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or specific computing environments) used for running experiments. |
| Software Dependencies | No | The paper mentions 'Training is performed using Adam [7]' but does not provide version numbers for Adam or any other software dependencies. |
| Experiment Setup | Yes | For each of these tasks, we perform a grid search on the loss parameters and on our hyper-parameters. Training is performed using Adam [7] and success rates are obtained by running the optimisation with 100 different random seeds. |
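The "Pseudocode" row above points to the paper's intermediary representation: a register-machine program written in a structured, code-like format (Figure 2b). The sketch below is illustrative only; the four-field instruction encoding, the instruction names, and the "Access" program are assumptions in the spirit of that figure, not the paper's exact listing.

```python
# Illustrative only: a tiny interpreter for a register-machine program
# resembling the paper's intermediary representation. Instruction names
# and memory layout are hypothetical.

def run(program, memory, registers):
    """Execute (opcode, dst, a, b) instructions until STOP, then return memory."""
    pc = 0
    while True:
        op, dst, a, b = program[pc]
        if op == "STOP":
            return memory
        elif op == "READ":    # registers[dst] <- memory[registers[a]]
            registers[dst] = memory[registers[a]]
        elif op == "WRITE":   # memory[registers[a]] <- registers[b]
            memory[registers[a]] = registers[b]
        elif op == "INC":     # registers[dst] <- registers[a] + 1
            registers[dst] = registers[a] + 1
        pc += 1

# Hypothetical "Access" program: memory[0] holds an index k, the array A
# starts at cell 1; the program writes A[k] back to cell 0.
program = [
    ("READ",  1, 0, 0),   # r1 <- memory[r0]   (k; r0 starts at 0)
    ("INC",   1, 1, 0),   # r1 <- r1 + 1       (skip the header cell)
    ("READ",  2, 1, 0),   # r2 <- memory[r1]   (A[k])
    ("WRITE", 0, 0, 2),   # memory[r0] <- r2
    ("STOP",  0, 0, 0),
]

memory = [2, 10, 20, 30, 40]            # k = 2, A = [10, 20, 30, 40]
print(run(program, memory, [0, 0, 0]))  # -> [30, 10, 20, 30, 40]
```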
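The "Experiment Setup" row reports success rates obtained by re-running the optimisation under 100 different random seeds. A minimal sketch of that evaluation loop follows; `train_and_evaluate` and its success criterion are hypothetical stand-ins for the paper's actual training code (Adam with grid-searched hyper-parameters).

```python
import random

def train_and_evaluate(seed: int) -> bool:
    """Hypothetical stand-in: train with the given seed and return True
    if the learned program meets the task's success criterion."""
    random.seed(seed)
    # ... actual training with Adam and the grid-searched
    # loss parameters / hyper-parameters would go here ...
    return random.random() < 0.8  # placeholder outcome

NUM_SEEDS = 100  # the paper reports success rates over 100 seeds

successes = sum(train_and_evaluate(seed) for seed in range(NUM_SEEDS))
print(f"success rate: {successes / NUM_SEEDS:.0%}")
```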