Synchromesh: Reliable Code Generation from Pre-trained Language Models

Authors: Gabriel Poesia, Oleksandr Polozov, Vu Le, Ashish Tiwari, Gustavo Soares, Christopher Meek, Sumit Gulwani

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our methods by synthesizing code from natural language descriptions using GPT-3 and Codex in three real-world languages: SQL queries, Vega-Lite visualizations and SMCalFlow programs. We observe substantial complementary gains from CSD (Constrained Semantic Decoding) and TST (Target Similarity Tuning) in prediction accuracy and in effectively preventing run-time errors.
Researcher Affiliation | Collaboration | Gabriel Poesia (Stanford University, poesia@stanford.edu); Oleksandr Polozov (X, the moonshot factory, polozov@google.com); Vu Le, Ashish Tiwari, Gustavo Soares, Christopher Meek, Sumit Gulwani (Microsoft Research, Redmond, {levu,astiwar,gustavo.soares,meek,sumitg}@microsoft.com)
Pseudocode | Yes | We provide the same algorithm in pseudo-code in Algorithms 1 and 2 in Figure 6 below.
Open Source Code | No | The paper does not provide an unambiguous statement of releasing the source code for the methodology or a direct link to a code repository.
Open Datasets | Yes | For SQL, we use the Spider dataset (Yu et al., 2018). For Vega-Lite, we use the NLV Corpus (Srinivasan et al., 2021). For SMCalFlow, we use the dataset that introduced the language (Andreas et al., 2020).
Dataset Splits | Yes | In Spider and SMCalFlow, we use the training/validation set split given in each dataset.
Hardware Specification | No | Training took around 3 hours on a single GPU. Our only access to the models was through the public OpenAI HTTP API.
Software Dependencies | No | To select examples, we use Sentence-BERT (Reimers & Gurevych, 2019) to fetch the 5 closest examples by cosine similarity. To facilitate this process, we created a library that extends any parser generated by ANTLR (Parr & Fisher, 2011). (A retrieval sketch follows the table.)
Experiment Setup | Yes | We used the AdamW optimizer with a learning rate of 2 × 10⁻⁵ and the default parameters in the S-BERT library. We sample from Codex with a temperature τ = 0.7 to obtain diverse but high-quality samples. (A configuration sketch follows the table.)
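
The Software Dependencies row mentions fetching the 5 closest training examples by cosine similarity with Sentence-BERT. Below is a minimal sketch of that retrieval step using the sentence-transformers library; the checkpoint name, the example pool, and the query are illustrative assumptions, not details taken from the paper.

```python
# Sketch of few-shot example selection by semantic similarity.
# The model name and the example pool below are placeholders, not from the paper.
from sentence_transformers import SentenceTransformer, util

# Hypothetical pool of (natural-language description, program) training pairs.
train_examples = [
    ("count rows in the singer table", "SELECT count(*) FROM singer"),
    ("list all concert names", "SELECT name FROM concert"),
    ("show stadiums with capacity over 5000", "SELECT name FROM stadium WHERE capacity > 5000"),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint
corpus_embeddings = model.encode([nl for nl, _ in train_examples],
                                 convert_to_tensor=True)

def closest_examples(query: str, k: int = 5):
    """Return the k training pairs whose descriptions are most similar to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=k)[0]
    return [train_examples[hit["corpus_id"]] for hit in hits]

prompt_examples = closest_examples("how many singers are there?")
```

The retrieved pairs would then be placed in the prompt ahead of the new description; semantic_search ranks by cosine similarity by default, matching the selection criterion described in the paper.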
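
The Experiment Setup row states that fine-tuning used AdamW at a learning rate of 2 × 10⁻⁵ with the S-BERT library defaults, and that Codex was sampled at temperature τ = 0.7. The sketch below shows how such a configuration could look with sentence-transformers and the legacy OpenAI completion endpoint; the training pairs, batch size, epoch count, engine name, and prompt are placeholders, not values reported by the authors.

```python
# Sketch of the reported configuration (illustrative values; only the optimizer,
# learning rate, and sampling temperature are taken from the paper).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
import openai

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed base checkpoint

# Hypothetical pairs of descriptions with a target similarity label.
train_pairs = [
    InputExample(texts=["count rows in singer", "how many singers are there"], label=1.0),
    InputExample(texts=["count rows in singer", "list all concert names"], label=0.0),
]
train_loader = DataLoader(train_pairs, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# fit() uses AdamW with lr=2e-5 by default in the S-BERT library;
# the learning rate is stated explicitly here for clarity.
model.fit(
    train_objectives=[(train_loader, train_loss)],
    epochs=1,
    optimizer_params={"lr": 2e-5},
)

# Sampling from Codex at temperature 0.7 via the legacy Completion API
# (engine name and prompt are placeholders).
completion = openai.Completion.create(
    model="code-davinci-002",
    prompt="-- how many singers are there?\nSELECT",
    max_tokens=128,
    temperature=0.7,
)
print(completion["choices"][0]["text"])
```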