Copy That! Editing Sequences by Copying Spans

Authors: Sheena Panthaplackel, Miltiadis Allamanis, Marc Brockschmidt

AAAI 2021, pages 13622-13630 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment: each variable is listed with its result and the supporting LLM response.
Research Type: Experimental. From the paper's Experimental Evaluation section: "We evaluate our RNN-based implementation on two types of tasks. First, we evaluate the performance of our models in the setting of learning edit representations (Yin et al. 2019) for natural language and code changes. Second, we consider correction-style tasks in which a model has to identify an error in an input sequence and then generate an output sequence that is a corrected version of the input."
Researcher Affiliation: Collaboration. "Sheena Panthaplackel (1), Miltiadis Allamanis (2), Marc Brockschmidt (2); 1: The University of Texas at Austin, Texas, USA; 2: Microsoft Research, Cambridge, UK."
Pseudocode: Yes. "Figure 3: Python-like pseudocode of beam search for span-copying decoders."
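The paper presents this beam search only as pseudocode in Figure 3. Below is a minimal, hypothetical Python sketch of the core idea: a hypothesis can be extended either by generating a single vocabulary token or by copying a whole input span in one step. The `score_actions` interface and all names here are assumptions, not the paper's API, and unlike Figure 3 this sketch compares raw log-probabilities directly rather than grouping hypotheses by output length.

```python
def span_copy_beam_search(input_tokens, score_actions,
                          beam=5, max_len=50, eos="</s>"):
    # score_actions(prefix) -> list of (action, logp), where an action is
    # ("gen", token) to emit one token, or ("copy", i, j) to copy
    # input_tokens[i:j] in a single decoding step. Hypothetical interface.
    live = [((), 0.0)]          # (output prefix, cumulative log-prob)
    finished = []
    while live:
        candidates = []
        for prefix, logp in live:
            for action, step_logp in score_actions(prefix):
                if action[0] == "gen":
                    new_prefix = prefix + (action[1],)
                else:                   # span copy: one action, many tokens
                    _, i, j = action
                    new_prefix = prefix + tuple(input_tokens[i:j])
                candidates.append((new_prefix, logp + step_logp))
        # keep the `beam` highest-scoring hypotheses
        candidates.sort(key=lambda cand: cand[1], reverse=True)
        live = []
        for prefix, logp in candidates[:beam]:
            if prefix and (prefix[-1] == eos or len(prefix) >= max_len):
                finished.append((prefix, logp))
            else:
                live.append((prefix, logp))
    return max(finished, key=lambda cand: cand[1])[0] if finished else ()
```

Because one copy action can append several tokens, hypotheses within a single beam step end up at different output lengths; this is the complication that motivates the length-grouped ranking in the paper's Figure 3.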
Open Source Code: No. The paper does not provide any statement about releasing its source code or a link to a code repository for the methodology described.
Open Datasets: Yes. "We perform our experiments on the datasets used by Yin et al. (2019). ... To test this hypothesis, we use the two bug-fix pair (BFP) datasets of Tufano et al. (2019). ... We use training/validation folds of the FCE (Yannakoudakis, Briscoe, and Medlock 2011) and W&I+LOCNESS (Granger 1998; Bryant et al. 2019) datasets for training and test on the test fold of the FCE dataset."
Dataset Splits: No. The paper mentions training/validation folds but does not provide specific percentages or counts for the training, validation, or test splits.
Hardware Specification: Yes. "For all experiments, we use a single NVidia K80 GPU."
Software Dependencies: No. The paper does not specify version numbers for any software dependencies or libraries used in the implementation; it names only general model types such as "RNN-based" and "bi-GRU".
Experiment Setup: Yes. "Our editor models have a 2-layer bi-GRU encoder with a hidden size of 64, a single-layer GRU decoder with a hidden size of 64, tied embedding layers with a hidden size of 64, and use a dropout rate of 0.2. ... For both the S2S+COPYTOK and S2S+COPYSPAN models we employ a 2-layer bi-GRU as an encoder and a single-layer GRU decoder. We use embeddings with 32 dimensions and GRUs with hidden units of size 128. ... Our models have a 2-layer bi-GRU encoder with a hidden size of 64, a single-layer GRU decoder with a hidden size of 64, a tied embedding layer of size 64, and use a dropout rate of 0.2."
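Since no code is released, a reimplementation must infer the architecture from these hyperparameters alone. The following is a minimal sketch of the first quoted configuration (2-layer bi-GRU encoder, single-layer GRU decoder, tied 64-dimensional embeddings, dropout 0.2), written in PyTorch purely for illustration; all class and method names are assumptions, and the span-copying mechanism itself is omitted.

```python
import torch
import torch.nn as nn

class EditorSketch(nn.Module):
    """Hypothetical reconstruction of the quoted editor hyperparameters;
    only the sizes below come from the paper."""
    def __init__(self, vocab_size, emb_dim=64, hidden=64, dropout=0.2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # 2-layer bidirectional GRU encoder, hidden size 64
        self.encoder = nn.GRU(emb_dim, hidden, num_layers=2,
                              bidirectional=True, dropout=dropout,
                              batch_first=True)
        # single-layer GRU decoder, hidden size 64
        self.decoder = nn.GRU(emb_dim, hidden, num_layers=1, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        # tied embeddings: the output projection shares the embedding matrix,
        # which requires emb_dim == hidden (both are 64 here)
        self.project = nn.Linear(hidden, vocab_size, bias=False)
        self.project.weight = self.embedding.weight

    def forward(self, src, tgt):
        # src, tgt: (batch, seq_len) token-id tensors
        _, enc_h = self.encoder(self.dropout(self.embedding(src)))
        # seed the decoder with the final forward-direction encoder state
        dec_out, _ = self.decoder(self.dropout(self.embedding(tgt)),
                                  enc_h[-2].unsqueeze(0).contiguous())
        return self.project(dec_out)
```

Note that the second quoted configuration (32-dimensional embeddings, GRU hidden size 128) could not tie weights this way, since tying requires the embedding and hidden dimensions to match; the paper does not state that those models use tied embeddings.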