Terra: Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs

Authors: Taebum Kim, Eunji Jeong, Geon-Woo Kim, Yunmo Koo, Sehoon Kim, Gyeong-In Yu, Byung-Gon Chun

NeurIPS 2021

Reproducibility Assessment (variable, result, and LLM response for each item):

Research Type: Experimental
LLM Response: "We evaluated Terra's performance improvement and coverage with ten imperative DL programs for several DNN architectures. The results show that Terra can speed up the execution of all ten imperative DL programs, whereas AutoGraph, one of the state-of-the-art systems, fails to execute five of them." (A sketch of the kind of imperative pattern that trips graph conversion appears after this table.)

Researcher Affiliation: Collaboration
LLM Response:
Taebum Kim - Seoul National University, FriendliAI (k.taebum@snu.ac.kr, ktaebum@friendli.ai)
Eunji Jeong - Samsung Research (eun-ji.jeong@samsung.com)
Geon-Woo Kim - Seoul National University, FriendliAI (gwsshs22@snu.ac.kr, gwsshs22@friendli.ai)
Yunmo Koo - Seoul National University, FriendliAI (mpbb03@snu.ac.kr, yunmorning@friendli.ai)
Sehoon Kim - University of California, Berkeley (sehoonkim@berkeley.edu)
Gyeong-In Yu - Seoul National University (gyeongin@snu.ac.kr)
Byung-Gon Chun - Seoul National University, FriendliAI (bgchun@snu.ac.kr, bgchun@friendli.ai)

Pseudocode: No
LLM Response: No explicitly labeled 'Pseudocode' or 'Algorithm' block was found in the provided text. Figure 1 shows code examples, but these are illustrative problem cases, not pseudocode for the proposed system.

Open Source Code: No
LLM Response: No explicit statement or link providing access to the open-source code for the methodology described in the paper was found.

Open Datasets: Yes
LLM Response: "For the experiments, we use ten imperative DL programs collected from open-source GitHub repositories: DropBlock [12], BERT-Q&A [13], Music Transformer [21], SDPoint [22], BERT-CLS [24], GPT2 [34], DCGAN [37], ResNet50 [38], Faster R-CNN [44], and YOLOv3 [45]."

Dataset Splits: No
LLM Response: The paper states "Experiment settings such as batch size and the dataset are included in Appendix E.", but Appendix E is not provided. No specific train/validation/test splits (e.g., 80/10/10) or absolute counts are mentioned for the datasets used.

Hardware Specification: Yes
LLM Response: "We conduct all the experiments on a single machine that is equipped with 8-core AMD Ryzen 7 2700X @ 3.7GHz and an NVIDIA TITAN Xp GPU."

Software Dependencies: Yes
LLM Response: "We use TensorFlow [6] v2.4.1 as our baseline DL framework. We have built Terra on TensorFlow v2.4.1... We use Ubuntu 18.04, CUDA 11.0, cuDNN 8.0, and Python 3.8.8." (An environment check against this stack appears after this table.)

Experiment Setup: No
LLM Response: The paper states "Experiment settings such as batch size and the dataset are included in Appendix E.", but Appendix E is not provided. No explicit hyperparameters (e.g., learning rate, number of epochs, specific optimizer settings) or detailed training configurations are present in the main text.
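
The coverage gap noted in the Research Type row reflects a known limitation of graph conversion: AutoGraph cannot translate arbitrary Python into a dataflow graph. The following minimal sketch is our illustration, not code from the paper; the model, data, and names are all hypothetical. It shows a training step that runs correctly under eager execution but breaks under tf.function/AutoGraph tracing, because .numpy() requires a concrete tensor value that graph mode does not provide.

    import tensorflow as tf

    # Hypothetical imperative training step (illustration only).
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

    def train_step(x, y):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.keras.losses.mse(y, model(x)))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        # Python-side use of a tensor value: fine eagerly, but .numpy()
        # is unavailable on symbolic tensors during tf.function tracing.
        if loss.numpy() > 1e3:
            print("loss spiked:", loss.numpy())
        return loss

    x = tf.random.normal([4, 10])
    y = tf.random.normal([4, 10])
    train_step(x, y)                  # eager execution: works
    # tf.function(train_step)(x, y)   # AutoGraph conversion: raises an error

Terra's co-execution targets exactly this class of program: the Python-dependent parts stay imperative while the DNN operations run symbolically.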
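
As a practical aid for reproduction (our sketch, not from the paper), the reported software stack can be checked with standard TensorFlow calls:

    import tensorflow as tf

    # Verify the environment against the reported stack:
    # TensorFlow 2.4.1, CUDA 11.0 / cuDNN 8.0, one NVIDIA GPU.
    print(tf.__version__)                          # expect '2.4.1'
    print(tf.test.is_built_with_cuda())            # expect True
    print(tf.config.list_physical_devices('GPU'))  # expect one GPU (e.g., TITAN Xp)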