Information Augmentation for Few-shot Node Classification
Authors: Zongqian Wu, Peng Zhou, Guoqiu Wen, Yingying Wan, Junbo Ma, Debo Cheng, Xiaofeng Zhu
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment results show the effectiveness and the efficiency of our proposed method, compared to state-of-the-art methods, in terms of different classification tasks. |
| Researcher Affiliation | Academia | 1Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541004, China 2School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China |
| Pseudocode | No | The paper describes its methods using prose and mathematical equations, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | In our experiments, we use four public real-world datasets, including two citation datasets (i.e., Cora and Citeseer [Kipf and Welling, 2016]), one KDD challenge dataset, i.e., Coauthor (CS) [Shchur et al., 2018], and one ecommerce dataset (i.e., Computers [Shchur et al., 2018]). |
| Dataset Splits | No | The paper states that the model is trained on the 'support set' and evaluated on the 'query set', and describes splitting data into 'base classes' and 'novel classes'. However, it does not explicitly describe a separate 'validation' dataset split with percentages, counts, or specific methodology for hyperparameter tuning or early stopping. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We investigate the parameter sensitivity of our IA-FSNC, i.e., top k for selecting the pseudo-label set and µ for the learning rate. ... Therefore, we experimentally set k = 50 and µ = 0.02 in our experiments. |
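The setup above mentions a top-k rule for building the pseudo-label set (k = 50 in the paper's experiments). The paper's exact selection criterion is not quoted in the table, but a common approach is to keep the k unlabeled nodes whose softmax predictions are most confident. A minimal sketch, assuming confidence-based selection (the function name and NumPy formulation are illustrative, not the authors' code):

```python
import numpy as np

def select_topk_pseudo_labels(probs, k=50):
    """Select the k most confident unlabeled nodes as pseudo-labels.

    probs: (num_nodes, num_classes) array of softmax scores from a GNN.
    Returns (node_indices, pseudo_labels) for the k highest-confidence nodes.
    """
    confidences = probs.max(axis=1)      # highest class score per node
    labels = probs.argmax(axis=1)        # predicted class per node
    idx = np.argsort(-confidences)[:k]   # indices of the k most confident nodes
    return idx, labels[idx]
```

The selected nodes and their predicted classes can then augment the small support set during episodic training, which is the general role a pseudo-label set plays in few-shot node classification.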