AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs
Authors: Shengrui Li, Xueting Han, Jing Bai
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that AdapterGNN achieves higher performance than other PEFT methods and is the only one that consistently surpasses full fine-tuning (by 1.6% in the chemistry domain and 5.7% in the biology domain, while tuning only 5% and 4% of the parameters, respectively), with lower generalization gaps. Experimental setup: "We evaluate the effectiveness of AdapterGNN by conducting extensive graph-level classification experiments on eight molecular datasets and one biology dataset." A hedged sketch of the generic adapter pattern appears after this table. |
| Researcher Affiliation | Collaboration | Shengrui Li¹,²*, Xueting Han², Jing Bai² (¹Tsinghua University, ²Microsoft Research Asia); lsr22@mails.tsinghua.edu.cn, {chrihan, jbai}@microsoft.com |
| Pseudocode | No | The paper does not include a figure, block, or section explicitly labeled "Pseudocode" or "Algorithm", nor does it present structured steps for a method or procedure formatted like code or an algorithm. |
| Open Source Code | Yes | Our code is available at https://github.com/Lucius-lsr/AdapterGNN. |
| Open Datasets | No | The paper mentions using "eight molecular datasets and one biology dataset" and states "Details of datasets and pre-trained models are in Appendix D.3 and D.4." However, it does not provide concrete access information (e.g., direct URLs, DOIs, specific repository names, or full citations with authors/year) for these datasets in the main text. |
| Dataset Splits | No | The paper does not explicitly provide specific percentages or sample counts for training, validation, or test splits. It mentions "training data D_n" and the "number of training samples n" in theoretical discussions, but gives no concrete details for experimental reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or cloud instance specifications used for running its experiments. |
| Software Dependencies | No | The paper does not provide a reproducible description of ancillary software, specifically lacking version numbers for key software components or libraries used in the experiments. |
| Experiment Setup | No | The paper has an "Experimental setup" section which states, "We evaluate the effectiveness of Adapter GNN by conducting extensive graph-level classification experiments on eight molecular datasets and one biology dataset. We employ prevalent pre-training methods, all based on a GIN backbone." It then defers further details by stating, "Details of datasets and pre-trained models are in Appendix D.3 and D.4. Implementations can be found in Appendix D.1." As the specific hyperparameter values, training configurations, or system-level settings are deferred to appendices and not explicitly stated in the main text, this criterion is not met. |
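
For readers unfamiliar with the adapter-style PEFT referenced in the table, the sketch below shows the generic bottleneck-adapter pattern (down-projection, nonlinearity, up-projection, residual connection) and the usual freeze-everything-but-the-adapters rule that yields tuned-parameter fractions in the few-percent range quoted above. This is a minimal illustration under assumed dimensions: `BottleneckAdapter`, `ToyGNNLayer`, and `tuned_fraction` are hypothetical names, and the code does not reproduce the paper's exact AdapterGNN architecture or its placement inside GIN layers.

```python
# Minimal sketch of the generic bottleneck-adapter PEFT pattern, under
# made-up dimensions; this is NOT the paper's exact AdapterGNN module.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Down-project, apply a nonlinearity, up-project, add a residual."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        # Near-zero init keeps the adapter close to an identity map at the
        # start of fine-tuning, so pre-trained behavior is preserved.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class ToyGNNLayer(nn.Module):
    """Hypothetical stand-in for a pre-trained GIN layer with an adapter."""

    def __init__(self, hidden_dim: int = 300, bottleneck_dim: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.adapter = BottleneckAdapter(hidden_dim, bottleneck_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.mlp(h))


def tuned_fraction(model: nn.Module) -> float:
    """Freeze all non-adapter weights; return the fraction left trainable."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name
    tuned = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return tuned / sum(p.numel() for p in model.parameters())


if __name__ == "__main__":
    layer = ToyGNNLayer()
    print(f"trainable fraction: {tuned_fraction(layer):.1%}")  # ~5.4% here
```

With these toy dimensions the frozen backbone leaves roughly 5% of parameters trainable, the same order as the 5%/4% budgets quoted in the table; that match is an artifact of the dimensions chosen here, not a reproduction of the paper's configuration.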