LEARNING EXECUTION THROUGH NEURAL CODE FUSION

Authors: Zhan Shi, Kevin Swersky, Daniel Tarlow, Parthasarathy Ranganathan, Milad Hashemi

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | As an illustration of this, we apply the new model to challenging dynamic tasks (branch prediction and prefetching) from the SPEC CPU benchmark suite, outperforming the state-of-the-art by 26% and 45% respectively.
Researcher Affiliation | Collaboration | Zhan Shi (The University of Texas at Austin, zshi17@utexas.edu); Kevin Swersky, Daniel Tarlow, Parthasarathy Ranganathan, Milad Hashemi (Google Research, {kswersky, dtarlow, parthas, miladh}@google.com)
Pseudocode | No | The paper does not contain a clearly labeled "Pseudocode" or "Algorithm" block or figure.
Open Source Code | No | The paper does not provide an explicit statement about releasing the source code or a link to a code repository for the described methodology.
Open Datasets | Yes | We use SPECint 2006 to evaluate our proposal. This is a standard benchmark suite commonly used to evaluate hardware and software system performance. (Sta, 2006)
Dataset Splits | Yes | We train the model on each benchmark independently. The first 70% of snapshots are used for training, and the last 30% for evaluation. ... These are split into 30 for training, 10 for validation (tuning the linear SVM described below) and 10 for testing.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory amounts, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper mentions software tools such as gcc, the GNU binary utilities, and Pin, but does not provide specific version numbers for these dependencies.
Experiment Setup | Yes | The hyperparameters for all models are given in Table 1: input feature size 64; hidden size 64; propagation steps 5; optimizer Adam; learning rate 0.01.
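The chronological split quoted in the Dataset Splits row (first 70% of snapshots for training, last 30% for evaluation) can be sketched as below. This is a minimal illustration: the function name `chronological_split` and the toy snapshot list are assumptions, not names from the paper; only the time-ordered 70/30 ratio comes from the quoted text.

```python
def chronological_split(snapshots, train_frac=0.7):
    """Split an ordered sequence of snapshots chronologically:
    the earliest train_frac go to training, the rest to evaluation.
    Names here are illustrative; only the 70/30 ratio is from the paper."""
    cut = round(len(snapshots) * train_frac)
    return snapshots[:cut], snapshots[cut:]

# Toy example with 100 ordered "snapshots" (stand-ins for real data).
train_set, eval_set = chronological_split(list(range(100)))
print(len(train_set), len(eval_set))  # 70 30
```

The key design point is that the split respects time order rather than shuffling, which avoids leaking future execution behavior into the training set.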
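The Table 1 hyperparameters quoted in the Experiment Setup row can be transcribed into a plain configuration dictionary. The values (64, 64, 5, Adam, 0.01) are taken from the quoted row; the key names are illustrative assumptions, since the paper presents them only as a table.

```python
# Hyperparameters from Table 1 of the paper, as a config dict.
# Values are quoted from the report; key names are illustrative.
HPARAMS = {
    "input_feature_size": 64,   # dimensionality of node input features
    "hidden_size": 64,          # GNN hidden state size
    "propagation_steps": 5,     # message-passing iterations
    "optimizer": "adam",        # optimizer named in Table 1
    "learning_rate": 0.01,      # Adam learning rate
}
```

Keeping hyperparameters in a single dict like this makes them easy to log alongside results, which is exactly the kind of detail a reproducibility audit checks for.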