HGPrompt: Bridging Homogeneous and Heterogeneous Graphs for Few-Shot Prompt Learning
Authors: Xingtong Yu, Yuan Fang, Zemin Liu, Xinming Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we thoroughly evaluate and analyze HGPROMPT through extensive experiments on three public datasets. |
| Researcher Affiliation | Academia | (1) University of Science and Technology of China, China; (2) Singapore Management University, Singapore; (3) National University of Singapore, Singapore |
| Pseudocode | No | No explicit pseudocode or algorithm blocks are provided in the main text, only mathematical formulas and descriptions of the framework components. |
| Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code for the described methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on three benchmark datasets. (1) ACM serves as a citation network... (2) DBLP serves as an all-encompassing bibliographic database... (3) Freebase (Bollacker et al. 2008)... For all these datasets, we employ the same raw data as Simple-HGN (Lv et al. 2021). We provide a summary of these datasets in Table 1. |
| Dataset Splits | Yes | we randomly generate 100 one-shot tasks for model training and validation. (see the task-sampling sketch after this table) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (CPU, GPU models, memory, etc.) used for running the experiments. |
| Software Dependencies | No | No software packages or versions are specified; the only tooling reference is indirect: We adopt Micro-F1 and Macro-F1 (Pedregosa et al. 2011; Lv et al. 2021) as the evaluation metrics. (see the metrics sketch after this table) |
| Experiment Setup | Yes | For hyperparameter settings and other implementation details about the baselines and HGPROMPT, see Appendix C. |
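
The Dataset Splits row mentions sampling 100 one-shot tasks. The following is a minimal, hypothetical sketch of how such episodes could be built, assuming a mapping from class labels to node ids; the names `labels_to_nodes`, `num_tasks`, and `num_query` are illustrative and not taken from the paper.

```python
# Hypothetical one-shot task (episode) sampler; a sketch, not the authors' code.
import random

def sample_one_shot_tasks(labels_to_nodes, num_tasks=100, num_query=5, seed=0):
    """Build `num_tasks` episodes, each with one support node per class."""
    rng = random.Random(seed)
    tasks = []
    for _ in range(num_tasks):
        support, query = {}, {}
        for label, nodes in labels_to_nodes.items():
            picked = rng.sample(nodes, 1 + num_query)
            support[label] = picked[0]   # the single labeled "shot" per class
            query[label] = picked[1:]    # held-out nodes for evaluation
        tasks.append((support, query))
    return tasks
```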
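
The metrics citation (Pedregosa et al. 2011) refers to scikit-learn. Below is a minimal sketch of computing Micro-F1 and Macro-F1 with it; `y_true` and `y_pred` are placeholder labels, not results from the paper.

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 1, 1, 0, 2]

micro_f1 = f1_score(y_true, y_pred, average="micro")  # aggregates over all instances
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean over classes
print(f"Micro-F1: {micro_f1:.3f}, Macro-F1: {macro_f1:.3f}")
```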