DeepAM: Migrate APIs with Multi-modal Sequence to Sequence Learning
Authors: Xiaodong Gu, Hongyu Zhang, Dongmei Zhang, Sunghun Kim
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results indicate that DEEPAM significantly increases the accuracy of API mappings as well as the number of API mappings when compared with the state-of-the-art approaches. |
| Researcher Affiliation | Collaboration | The Hong Kong University of Science and Technology, Hong Kong, China; The University of Newcastle, Callaghan, Australia; Microsoft Research, Beijing, China |
| Pseudocode | No | The paper does not contain any pseudocode or explicitly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links to source code or an explicit statement about its public availability. |
| Open Datasets | No | The paper mentions collecting data from GitHub ('We download Java and C# projects created from 2008 to 2014 from GitHub') but does not provide a specific link or formal citation for the processed dataset used for training, nor state its public availability. |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits with specific percentages or counts. It only mentions how training batches are constituted ('Each batch is constituted with 100 Java pairs and 100 C# pairs that are randomly selected from corresponding datasets.'). |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions software like Eclipse's JDT compiler, Roslyn, and Lucene but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We set the batch size to 200. Each batch is constituted of 100 Java pairs and 100 C# pairs randomly selected from the corresponding datasets. The vocabulary sizes of both APIs and natural language descriptions are set to 10,000. The maximum sequence lengths L_a and L_d are both set to 30; sequences that exceed these lengths are excluded from training. Both GRUs have two hidden layers, each with 1,000 hidden units. |
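
For readers who want to sanity-check the reported setup, here is a minimal sketch of the configuration from the row above. The paper does not name its framework, so this assumes PyTorch; the `ApiSeq2Seq` and `make_batch` names are hypothetical, and the embedding size is an assumption not reported in the paper.

```python
# Sketch of the DeepAM training configuration described above.
# Framework (PyTorch), module names, and EMBED size are assumptions;
# vocab size, max length, GRU depth/width, and batching are from the paper.
import random
import torch.nn as nn

VOCAB_SIZE = 10_000  # both API and description vocabularies
MAX_LEN    = 30      # L_a and L_d; longer sequences are excluded
HIDDEN     = 1000    # hidden units per GRU layer
LAYERS     = 2       # hidden layers in each GRU
EMBED      = 500     # assumption: embedding size is not given in the paper

class ApiSeq2Seq(nn.Module):
    """GRU encoder-decoder over API sequences and NL descriptions."""
    def __init__(self):
        super().__init__()
        self.embed   = nn.Embedding(VOCAB_SIZE, EMBED)
        self.encoder = nn.GRU(EMBED, HIDDEN, num_layers=LAYERS, batch_first=True)
        self.decoder = nn.GRU(EMBED, HIDDEN, num_layers=LAYERS, batch_first=True)
        self.out     = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))      # encode source sequence
        dec_out, _ = self.decoder(self.embed(tgt), state)
        return self.out(dec_out)                      # per-token vocab logits

def make_batch(java_pairs, csharp_pairs):
    """Batch size 200: 100 Java pairs + 100 C# pairs, sampled at random."""
    return random.sample(java_pairs, 100) + random.sample(csharp_pairs, 100)
```

Each `make_batch` call reproduces the mixed-language batching the paper describes: 200 pairs per step, half Java and half C#, which is what lets the model align API usage across the two languages during training.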