Graph Few-shot Learning with Task-specific Structures

Authors: Song Wang, Chen Chen, Jundong Li

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We further conduct extensive experiments on five node classification datasets under both single- and multiple-graph settings to validate the superiority of our framework over the state-of-the-art baselines.
Researcher Affiliation | Academia | Song Wang (University of Virginia, sw3wv@virginia.edu); Chen Chen (University of Virginia, zrh6du@virginia.edu); Jundong Li (University of Virginia, jundong@virginia.edu)
Pseudocode | No | The paper describes its methods using prose and mathematical equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is provided at https://github.com/SongW-SW/GLITTER.
Open Datasets | Yes | To evaluate the performance of GLITTER on the few-shot node classification task, we conduct experiments on five prevalent real-world graph datasets, two for the multiple-graph setting and three for the single-graph setting. The detailed statistics are provided in Table 1. For the multiple-graph setting, we use the following two datasets: (1) Tissue-PPI [49] consists of 24 protein-protein interaction (PPI) networks... (2) Fold-PPI [18] contains 144 tissue PPI networks... For the single-graph setting, we leverage three datasets: (1) DBLP [30] is a citation network... (2) Cora-full [2] is also a citation network... (3) ogbn-arxiv [17] is a citation network...
Dataset Splits | Yes | Specifically, T_i = {S_i, Q_i}, where S_i is the support set of T_i and consists of K labeled nodes for each of N classes (i.e., |S_i| = NK). The corresponding label set of T_i is Y_i, where |Y_i| = N. Y_i is sampled from the whole training label set Y_train. With S_i as references, the model is required to classify nodes in the query set Q_i, which contains Q unlabeled samples. After training, the model will be evaluated on a series of meta-test tasks, which follow a similar setting as meta-training tasks. The training process of our framework is conducted on a series of meta-training tasks. After training, the model will be evaluated on a specific number of meta-test tasks.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions 'PyTorch' in its references but does not specify version numbers for any software dependencies used in its implementation.
Experiment Setup | Yes | Specifically, for each meta-task, we first perform several update steps to learn task-specific structures based on the two structure losses: θ_S^(i) = θ_S^(i-1) − α∇L_S^(i), where θ_S denotes the parameters used to learn task-specific structures, and L_S^(i) = L_N^(i) + L_M^(i). θ_G^(i) = θ_G^(i-1) − α∇L_support^(i), where L_support^(i) = −Σ_k Σ_j y_{k,j} log p_{k,j}^(i) is the cross-entropy loss, and α is the base learning rate. After repeating these steps for η times, the loss on the query set will be used for the meta-update: θ_G = θ_G^(η) − β1∇L_query^(η) and θ_S = θ_S^(η) − β2∇L_S^(η), where β1 and β2 are the meta-learning rates, and L_query^(η) is the cross-entropy loss calculated on query nodes.
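For concreteness, the episode construction described in the Dataset Splits row above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: the containers `labels_by_class` and `train_classes` are assumed, and Q is treated as a per-class query size (the excerpt only states the total query-set size).

```python
import random

def sample_meta_task(labels_by_class, train_classes, N, K, Q):
    """Sample one N-way K-shot meta-task T_i = {S_i, Q_i}.

    labels_by_class: dict mapping each class label to a list of node ids
                     (assumed container, not from the paper).
    train_classes:   the meta-training label set Y_train, as a list.
    """
    # Y_i: N classes drawn from the training label set Y_train.
    task_classes = random.sample(train_classes, N)
    support, query = [], []
    for cls in task_classes:
        # Draw K support shots plus Q query nodes for this class, without overlap.
        nodes = random.sample(labels_by_class[cls], K + Q)
        support += [(node, cls) for node in nodes[:K]]  # K labeled shots per class
        query += [(node, cls) for node in nodes[K:]]    # held-out query nodes
    return support, query
```

Meta-test tasks would be sampled the same way from the disjoint test label set, which is what lets evaluation follow "a similar setting as meta-training tasks".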
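The update scheme in the Experiment Setup row is a MAML-style bi-level optimization: η inner steps adapt θ_S (with L_S = L_N + L_M) and θ_G (with the support cross-entropy), then the query loss drives the meta-update. Below is a rough first-order PyTorch sketch of one meta-task update; `structure_net` (θ_S), `gnn` (θ_G), and the three loss callables are hypothetical placeholders, and detaching the fast weights is a first-order simplification of whatever higher-order gradients the authors may use.

```python
import torch

def meta_train_step(structure_net, gnn, task, alpha, beta1, beta2, eta,
                    structure_loss, support_loss, query_loss):
    """One meta-task update (first-order sketch, not the authors' code).

    structure_net holds the structure parameters theta_S and gnn holds the
    GNN parameters theta_G; the three loss callables stand in for
    L_S = L_N + L_M, the support cross-entropy, and the query cross-entropy.
    """
    # Fast weights: per-task copies so inner updates leave meta-weights intact.
    theta_S = [p.detach().clone().requires_grad_(True)
               for p in structure_net.parameters()]
    theta_G = [p.detach().clone().requires_grad_(True)
               for p in gnn.parameters()]

    for _ in range(eta):
        # theta_S^(i) = theta_S^(i-1) - alpha * grad L_S^(i)
        g_S = torch.autograd.grad(structure_loss(theta_S, task), theta_S)
        theta_S = [(p - alpha * g).detach().requires_grad_(True)
                   for p, g in zip(theta_S, g_S)]
        # theta_G^(i) = theta_G^(i-1) - alpha * grad L_support^(i)
        g_G = torch.autograd.grad(support_loss(theta_G, theta_S, task), theta_G)
        theta_G = [(p - alpha * g).detach().requires_grad_(True)
                   for p, g in zip(theta_G, g_G)]

    # Meta-update: the query loss updates theta_G with rate beta1,
    # and the structure loss updates theta_S with rate beta2.
    g_G = torch.autograd.grad(query_loss(theta_G, theta_S, task), theta_G)
    g_S = torch.autograd.grad(structure_loss(theta_S, task), theta_S)
    with torch.no_grad():
        for p, g in zip(gnn.parameters(), g_G):
            p -= beta1 * g
        for p, g in zip(structure_net.parameters(), g_S):
            p -= beta2 * g
```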