Fast Graph Sharpness-Aware Minimization for Enhancing and Accelerating Few-Shot Node Classification
Authors: Yihong Luo, Yuhan Chen, Siya Qiu, Yiwei Wang, Chen Zhang, Yan Zhou, Xiaochun Cao, Jing Tang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our proposed algorithm outperforms the standard SAM with lower computational costs in FSNC tasks. |
| Researcher Affiliation | Collaboration | 1 The Hong Kong University of Science and Technology; 2 The Hong Kong University of Science and Technology (Guangzhou); 3 School of Computer Science and Engineering, Sun Yat-sen University; 4 University of California, Merced; 5 University of California, Los Angeles; 6 Createlink Technology; 7 School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University |
| Pseudocode | Yes | Algorithm 1 Training with FGSAM and FGSAM+. |
| Open Source Code | Yes | The code is available at https://github.com/draym28/FGSAM_NeurIPS24 |
| Open Datasets | Yes | We conduct evaluations on three widely used real-world benchmark node classification datasets: CoraFull [5], DBLP and ogbn-arXiv [18] |
| Dataset Splits | Yes | we use the train/val/test split as in [34] and [24]. We further split Cbase into two disjoint class sets: training class set Ctr and validation class set Cval, such that Cbase = Ctr ∪ Cval and Ctr ∩ Cval = ∅. Overall, we use Ctr and Cval for train and validation in the meta-training stage, respectively, and use Cnovel for meta-test. We split C into Ctr, Cval and Cnovel according to the class split ratio in Tab. 5. |
| Hardware Specification | Yes | We implement our model by PyTorch [29] and conduct experiments on an RTX-3090Ti. |
| Software Dependencies | No | We implement our model by PyTorch [29] and conduct experiments on an RTX-3090Ti. We use Optuna [2] to search the hyper-parameters for each setting. |
| Experiment Setup | Yes | We use Optuna [2] to search the hyper-parameters for each setting. See Appendix D.2 for detailed FSNC learning protocol. Table 6: Hyper-parameters Search Space. |
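The class-split protocol quoted in the Dataset Splits row (partitioning the full class set C into disjoint Ctr, Cval, and Cnovel) can be sketched as below. This is an illustrative sketch only: the function name, the default `ratios`, and the seed handling are assumptions, not taken from the authors' code; the paper's actual per-dataset split ratios are given in its Tab. 5.

```python
import random

def split_classes(classes, ratios=(0.6, 0.2, 0.2), seed=0):
    """Split a class set C into disjoint Ctr / Cval / Cnovel subsets.

    `ratios` gives the fraction of classes assigned to (train, val, novel);
    the values here are placeholders, not the paper's actual ratios.
    """
    rng = random.Random(seed)
    classes = list(classes)
    rng.shuffle(classes)
    n = len(classes)
    n_tr = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    c_tr = set(classes[:n_tr])
    c_val = set(classes[n_tr:n_tr + n_val])
    c_novel = set(classes[n_tr + n_val:])
    # Sanity checks: the three sets are pairwise disjoint and cover C,
    # i.e. C = Ctr ∪ Cval ∪ Cnovel with empty pairwise intersections.
    assert c_tr.isdisjoint(c_val) and c_tr.isdisjoint(c_novel)
    assert c_val.isdisjoint(c_novel)
    assert c_tr | c_val | c_novel == set(classes)
    return c_tr, c_val, c_novel
```

Ctr and Cval would then drive meta-training and validation, with Cnovel held out entirely for meta-test, matching the disjointness constraint quoted above.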