Tree-to-tree Neural Networks for Program Translation

Authors: Xinyun Chen, Chang Liu, Dawn Song

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the program translation capability of our tree-to-tree model against several state-of-the-art approaches. Compared against other neural translation models, we observe that our approach is consistently better than the baselines, by a margin of up to 15 points. Further, our approach improves on the previous state-of-the-art program translation approaches by a margin of 20 points on the translation of real-world projects. (A toy sketch of a tree encoder follows the table.)
Researcher Affiliation | Academia | Xinyun Chen (UC Berkeley, xinyun.chen@berkeley.edu); Chang Liu (UC Berkeley, liuchang2005acm@gmail.com); Dawn Song (UC Berkeley, dawnsong@cs.berkeley.edu)
Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper.
Open Source Code | No | The paper provides neither an explicit statement about, nor a link to, open-source code for the described methodology.
Open Datasets | No | The paper describes how its datasets were generated and notes that one dataset is built from existing open-source projects, but it provides no concrete access information (link, DOI, or formal citation of a public dataset) for the specific datasets used in the experiments. The grammar used to generate data is said to be in the supplementary material, but the generated dataset itself is not released. (A toy sketch of grammar-based generation follows the table.)
Dataset Splits | Yes | To build the dataset, we randomly generate 100,000 pairs of source and target programs for training, 10,000 pairs as the development set, and 10,000 pairs for testing. (A sketch of such a split follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper mentions software components but does not provide specific version numbers for reproducibility.
Experiment Setup | No | The paper states that "the hyper-parameters used in different models can be found in the supplementary material"; the specific experimental setup is therefore not detailed in the main text.
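
The paper's central technique is a tree-to-tree encoder-decoder over program parse trees, so a rough illustration of the encoder side may be useful. The following is a minimal sketch, not the authors' implementation (which, per the table, is not released): a generic binary Tree-LSTM cell of the kind such encoders build on, in PyTorch-style Python. The class name, dimensions, and wiring are illustrative assumptions.

    import torch
    import torch.nn as nn

    class BinaryTreeLSTMCell(nn.Module):
        """Generic binary Tree-LSTM cell (after Tai et al., 2015).

        Combines a node's embedding with the states of its two children,
        bottom-up over a binarized AST. Illustrative only; not the paper's code.
        """

        def __init__(self, dim):
            super().__init__()
            # One affine map yields all five gates: input, left forget,
            # right forget, output, and candidate update.
            self.gates = nn.Linear(3 * dim, 5 * dim)

        def forward(self, x, left, right):
            (h_l, c_l), (h_r, c_r) = left, right
            i, f_l, f_r, o, u = self.gates(
                torch.cat([x, h_l, h_r], dim=-1)).chunk(5, dim=-1)
            c = (torch.sigmoid(i) * torch.tanh(u)
                 + torch.sigmoid(f_l) * c_l
                 + torch.sigmoid(f_r) * c_r)
            h = torch.sigmoid(o) * torch.tanh(c)
            return h, c

Encoding applies such a cell at every node from leaves to root; the paper's decoder (not shown) expands the target tree top-down while attending over the encoder's per-node states.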
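
On the Open Datasets row: the synthetic dataset is produced by sampling programs from a grammar that appears only in the paper's supplementary material. Since that grammar is not available here, the sketch below shows the general technique only: recursive expansion of a context-free grammar, with a depth cap to force termination. The toy grammar and function name are hypothetical, not the paper's.

    import random

    # Hypothetical toy grammar; the paper's real grammar is in its
    # supplementary material and is not reproduced here.
    GRAMMAR = {
        "<stmt>": [["<var>", "=", "<expr>", ";"],
                   ["if", "(", "<expr>", ")", "<stmt>"]],
        "<expr>": [["<var>"], ["<num>"], ["<expr>", "+", "<expr>"]],
        "<var>": [["x"], ["y"]],
        "<num>": [["0"], ["1"]],
    }

    def generate(symbol="<stmt>", depth=0, max_depth=6, rng=random):
        """Expand `symbol` by recursively sampling productions."""
        if symbol not in GRAMMAR:
            return symbol  # terminal token
        options = GRAMMAR[symbol]
        if depth >= max_depth:
            # Near the cap, take the shortest production so expansion terminates.
            options = [min(options, key=len)]
        production = rng.choice(options)
        return " ".join(generate(s, depth + 1, max_depth, rng) for s in production)

    print(generate())  # e.g. "x = y + 1 ;"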
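
On the Dataset Splits row: the quoted sizes (100,000 train / 10,000 dev / 10,000 test pairs) are concrete enough to reproduce the partitioning step. Below is a minimal sketch, assuming the generated pairs sit in a Python list of (source, target) tuples; the function name and fixed seed are illustrative choices, not specified by the paper.

    from random import Random

    def split_pairs(pairs, n_train=100_000, n_dev=10_000, n_test=10_000, seed=0):
        """Shuffle program pairs and partition them into train/dev/test."""
        assert len(pairs) >= n_train + n_dev + n_test, "not enough pairs"
        rng = Random(seed)  # fixed seed so the split is reproducible
        shuffled = list(pairs)
        rng.shuffle(shuffled)
        train = shuffled[:n_train]
        dev = shuffled[n_train:n_train + n_dev]
        test = shuffled[n_train + n_dev:n_train + n_dev + n_test]
        return train, dev, test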