Neural Attribution for Semantic Bug-Localization in Student Programs
Authors: Rahul Gupta, Aditya Kanade, Shirish Shevade
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that Neural Bug Locator (NBL) is more accurate than two state-of-the-art program-spectrum based and one syntactic difference based bug-localization baselines in most cases. |
| Researcher Affiliation | Collaboration | Rahul Gupta (1), Aditya Kanade (1,2), Shirish Shevade (1). (1) Department of Computer Science and Automation, Indian Institute of Science, Bangalore, KA 560012, India; (2) Google Brain, CA, USA |
| Pseudocode | No | The paper describes the technical details and phases of the bug-localization approach in prose but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | 4. We provide both the dataset and the implementation of NBL online at https://bitbucket.org/iiscseal/nbl/. |
| Open Datasets | Yes | 4. We provide both the dataset and the implementation of NBL online at https://bitbucket.org/iiscseal/nbl/. |
| Dataset Splits | Yes | Pairing these programs with their corresponding test IDs results in a dataset with around 270K examples. We set aside 5% of this dataset for validation, and use the rest for training. |
| Hardware Specification | Yes | We train our model for 50 epochs, which takes about one hour on an Intel(R) Xeon(R) Gold 6126 machine, clocked at 2.60GHz with 64GB of RAM and equipped with an NVIDIA Tesla P100 GPU. |
| Software Dependencies | No | The paper mentions using 'Keras [7] using TensorFlow [1] as back-end' and 'pycparser [5]', but it does not specify version numbers for these software components, which is required for reproducibility. |
| Experiment Setup | Yes | We train our model for 50 epochs using the Adam optimizer [17], with a learning rate of 0.0001. (See the configuration sketch after the table.) |
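
The Dataset Splits and Experiment Setup rows together pin down the reported training configuration. As a rough, non-authoritative sketch, the snippet below wires those reported values (Adam with learning rate 0.0001, 50 epochs, a 5% validation split) into a generic Keras/TensorFlow pipeline. The placeholder data and the stand-in sequence encoder are assumptions for illustration only; they do not reproduce the NBL architecture or its dataset.

```python
# Sketch of the reported training configuration: Adam, lr = 0.0001,
# 50 epochs, 5% of the data held out for validation.
# Everything else below (data shapes, layers, batch size) is hypothetical.
import numpy as np
from tensorflow import keras

# Hypothetical placeholder data: token-ID sequences with binary labels.
# The real dataset pairs ~270K buggy programs with failing test IDs.
num_examples, seq_len, vocab_size = 1000, 200, 500
x = np.random.randint(0, vocab_size, size=(num_examples, seq_len))
y = np.random.randint(0, 2, size=(num_examples, 1))

# Stand-in sequence encoder; NBL's actual network over program
# representations is not reproduced here.
model = keras.Sequential([
    keras.layers.Embedding(vocab_size, 64),
    keras.layers.Bidirectional(keras.layers.LSTM(64)),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # lr = 0.0001 as reported
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# 50 epochs with a 5% validation split, mirroring the reported setup.
# Batch size is not reported in the table; 32 is an arbitrary choice.
model.fit(x, y, epochs=50, validation_split=0.05, batch_size=32)
```

The only values taken from the paper are the optimizer, learning rate, epoch count, and validation fraction; the model layers, input shapes, and batch size are placeholders.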