CooBa: Cross-project Bug Localization via Adversarial Transfer Learning

Authors: Ziye Zhu, Yun Li, Hanghang Tong, Yu Wang

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four large-scale real-world data sets demonstrate that the proposed COOBA significantly outperforms the state of the art techniques. (Section 4, Experimental Results and Analysis)
Researcher Affiliation | Academia | (1) Department of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing, China; (2) Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology: there is no explicit code release statement and no repository link.
Open Datasets | Yes | We evaluate our method on the data sets provided by Ye et al. [2014]. The data sets contain the bug reports, source code links, buggy files, API documentation, and the oracle of bug-to-file mappings, which are all publicly available (http://dx.doi.org/10.6084/m9.figshare.951967). Four open-source projects are collected in these data sets: AspectJ, an aspect-oriented programming extension for the Java programming language; SWT, an open-source widget toolkit for Java; JDT, a suite of Java development tools for Eclipse; and Eclipse Platform UI, the user interface of a development platform for Eclipse. (A loading sketch appears after the table.)
Dataset Splits | Yes | For bug localization in the cross-project context, the training data includes all fixed bug reports of the source project and 20% of the fixed bug reports of the target project; the remaining 80% of the target project's bug reports are used for testing. The experiment is repeated 10 times with cross-validation. (A split sketch appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types and speeds, memory amounts, or detailed machine specifications) used to run its experiments.
Software Dependencies | No | The paper mentions several models and optimizers (GloVe, Bi-LSTMs, GCN, CNN, Adam) but does not provide specific software package names with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | No | The paper mentions hyperparameters such as τ and λ and the use of the Adam optimizer, but it does not provide concrete values (e.g., learning rate, batch size, number of epochs) or detailed training configurations.
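
For the Open Datasets row, the following Python sketch shows one way the public benchmark of Ye et al. [2014] could be organized locally. Only the figshare DOI and the four project names come from the paper; the directory layout, file name, and record format are assumptions made for illustration.

```python
# Hypothetical loader for the bug-localization benchmark of Ye et al. [2014].
# The figshare DOI and project names are taken from the paper; everything else
# (paths, file names, CSV layout) is assumed for illustration only.
from pathlib import Path
import csv

FIGSHARE_DOI = "http://dx.doi.org/10.6084/m9.figshare.951967"  # public data set cited in the paper

PROJECTS = {
    "AspectJ": "aspect-oriented programming extension for Java",
    "SWT": "open-source widget toolkit for Java",
    "JDT": "Java development tools for Eclipse",
    "EclipsePlatformUI": "user interface of the Eclipse platform",
}

def load_fixed_bug_reports(root: Path, project: str) -> list[dict]:
    """Read the fixed bug reports of one project from an assumed
    <root>/<project>/bug_reports.csv file (layout not specified by the paper)."""
    path = root / project / "bug_reports.csv"
    with path.open(newline="", encoding="utf-8") as handle:
        return list(csv.DictReader(handle))
```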
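
For the Dataset Splits row, here is a minimal sketch of the split protocol the paper describes: training uses all fixed bug reports of the source project plus 20% of the target project's fixed bug reports, the remaining 80% of the target project is held out for testing, and the procedure is repeated 10 times. The data representation (one list of report records per project) is an assumption.

```python
# Sketch of the cross-project train/test split described in the paper.
# Report records can be any objects; one list per project is assumed here.
import random

def cross_project_split(source_reports, target_reports, target_train_ratio=0.2, seed=0):
    """Train = all source-project reports + 20% of target-project reports;
    test = the remaining 80% of target-project reports."""
    rng = random.Random(seed)
    shuffled = list(target_reports)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * target_train_ratio)
    train = list(source_reports) + shuffled[:cut]
    test = shuffled[cut:]
    return train, test

# The paper repeats the experiment 10 times; different seeds give different splits.
# splits = [cross_project_split(source, target, seed=i) for i in range(10)]
```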