ADELT: Transpilation between Deep Learning Frameworks

Authors: Linyuan Gong, Jiayi Wang, Alvin Cheung

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | It outperforms state-of-the-art transpilers, improving pass@1 rate by 16.2 pts and 15.0 pts for PyTorch-Keras and PyTorch-MXNet transpilation pairs respectively. We provide open access to our code at https://github.com/gonglinyuan/adelt. ... 3 Experiments: We evaluate the effectiveness of ADELT on the task of transpilation between PyTorch, Keras, and MXNet and compare our method with baselines. ... Table 1: Comparison between ADELT and other methods on source-to-source transpilation. ... 3.7 Ablation Studies: We conduct ablation studies on PyTorch-Keras transpilation to validate the contribution of each part of ADELT.
Researcher Affiliation | Academia | Linyuan Gong, Jiayi Wang, Alvin Cheung. University of California, Berkeley. {gly, jiayi_wang, akcheung}@berkeley.edu
Pseudocode | Yes | Algorithm 1: Pseudo-code for domain-adversarial training. (A hedged sketch of such a training loop appears after this table.)
Open Source Code | Yes | We provide open access to our code at https://github.com/gonglinyuan/adelt.
Open Datasets | Yes | For training, we construct a PyTorch-Keras-MXNet corpus of deep learning code from various Internet sources, containing 49,705 PyTorch modules, 11,443 Keras layers/models, and 4,785 MXNet layers/models. ... We consider 3 data sources (GitHub, JuiCe, Kaggle) to build our DL corpus. GitHub: the GitHub public dataset available on Google BigQuery [3]. JuiCe: a code generation dataset [Agashe et al., 2019]. ... Kaggle: all files in KGTorrent [Quaranta et al., 2021], a dataset of Jupyter Notebooks from Kaggle [4]. [3] https://console.cloud.google.com/marketplace/details/github/github-repos [4] https://kaggle.com
Dataset Splits | No | The paper mentions an "evaluation benchmark" and an "unsupervised validation criterion" but gives no specific train/validation/test splits (percentages or counts) for its main DL corpus; it describes the evaluation benchmark but not how the training corpus is partitioned.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for its experiments; it discusses software and models but gives no hardware specifications.
Software Dependencies | No | The paper mentions software such as PyBERT, Codex, GPT-3, and GPT-4 but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | Adversarial training: the generator and discriminator of ADELT are multilayer perceptrons. We search the learning rate and batch size according to the unsupervised validation criterion of average cosine similarity [Conneau et al., 2018], which measures the consistency between learned API keyword embeddings and generated keyword translations. Other hyperparameters are set based on previous studies [Conneau et al., 2018], with details described in Appendix A.2. (A sketch of this criterion appears after the table.)
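
Only the title of the paper's Algorithm 1 is quoted above, so the following is a minimal, hypothetical sketch of what a domain-adversarial training loop for aligning API-keyword embeddings could look like. The module sizes, optimizers, and loop structure are assumptions for illustration, not ADELT's actual code.

# Hypothetical sketch of domain-adversarial training in the spirit of the
# paper's Algorithm 1. All names, dimensions, and hyperparameters below are
# illustrative assumptions.
import torch
import torch.nn as nn

EMB_DIM, HIDDEN = 768, 2048  # assumed embedding and hidden sizes

# Generator: maps source-framework keyword embeddings into the target space.
generator = nn.Sequential(
    nn.Linear(EMB_DIM, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, EMB_DIM))
# Discriminator: predicts which framework an embedding came from.
discriminator = nn.Sequential(
    nn.Linear(EMB_DIM, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(src_emb, tgt_emb):
    """One adversarial step on batches of API-keyword embeddings."""
    # 1) Discriminator update: real target embeddings -> 1, mapped source -> 0.
    d_opt.zero_grad()
    mapped = generator(src_emb).detach()
    d_loss = (bce(discriminator(tgt_emb), torch.ones(len(tgt_emb), 1)) +
              bce(discriminator(mapped), torch.zeros(len(mapped), 1)))
    d_loss.backward()
    d_opt.step()
    # 2) Generator update: fool the discriminator into labeling mapped
    #    source embeddings as target-framework embeddings.
    g_opt.zero_grad()
    g_loss = bce(discriminator(generator(src_emb)), torch.ones(len(src_emb), 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Toy usage: random vectors standing in for PyTorch/Keras keyword embeddings.
src, tgt = torch.randn(32, EMB_DIM), torch.randn(32, EMB_DIM)
print(train_step(src, tgt))

The GAN-style alternation is the standard recipe for unsupervised embedding alignment; whether ADELT's Algorithm 1 matches this structure exactly cannot be confirmed from the quoted text alone.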
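
The setup row above says learning rate and batch size are selected by the unsupervised criterion of average cosine similarity [Conneau et al., 2018]. Below is a small sketch of that criterion under the assumption that each mapped source keyword is scored against its nearest-neighbor target keyword; the function and variable names are illustrative, not from the paper.

# Hypothetical sketch of the average-cosine-similarity validation criterion
# (after Conneau et al., 2018). Assumes nearest-neighbor retrieval; the paper
# may use a refined variant.
import torch
import torch.nn.functional as F

def avg_cosine_similarity(mapped_src, tgt):
    """mapped_src: (n_src, d) generator outputs; tgt: (n_tgt, d) target embeddings."""
    src_n = F.normalize(mapped_src, dim=1)
    tgt_n = F.normalize(tgt, dim=1)
    sims = src_n @ tgt_n.T          # pairwise cosine similarities
    best = sims.max(dim=1).values   # similarity to each keyword's nearest translation
    return best.mean().item()       # higher = more consistent induced dictionary

# Model selection: keep the learning rate / batch size with the highest score.
score = avg_cosine_similarity(torch.randn(100, 768), torch.randn(80, 768))
print(f"validation score: {score:.4f}")

Because the criterion needs no parallel dictionary, it lets hyperparameters be tuned without labeled keyword pairs, which is why it serves as the unsupervised validation signal here.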