Unsupervised Episode Generation for Graph Meta-learning

Authors: Jihyeong Jung, Sangwoo Seo, Sungwon Kim, Chanyoung Park

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results demonstrate the effectiveness of our proposed unsupervised episode generation method for graph meta-learning towards the FSNC task. Our code is available at: https://github.com/JhngJng/NaQ-PyTorch.
Researcher Affiliation | Academia | Jihyeong Jung (1), Sangwoo Seo (1), Sungwon Kim (2), Chanyoung Park (1,2). (1) Department of Industrial & Systems Engineering, KAIST; (2) Graduate School of Data Science, KAIST. Correspondence to: Chanyoung Park <cy.park@kaist.ac.kr>.
Pseudocode | Yes | Algorithm 1: Training Meta-learner Meta(·; θ)
Open Source Code | Yes | Our code is available at: https://github.com/JhngJng/NaQ-PyTorch.
Open Datasets | Yes | We use five benchmark datasets that are widely used in FSNC to comprehensively evaluate the performance of our unsupervised episode generation method: 1) two product networks (Amazon-Clothing, Amazon-Electronics (McAuley et al., 2015)), 2) three citation networks (Cora-Full (Bojchevski & Günnemann, 2018), DBLP (Tang et al., 2008)) in addition to a large-scale dataset ogbn-arxiv (Hu et al., 2020).
Dataset Splits | Yes | For Amazon-Clothing, as the validation set contains 17 classes, evaluations on 20-way cannot be conducted. Instead, the evaluation is done in 5/10-way 1/5-shot settings, i.e., four settings in total. In the validation and testing phases, we sampled 50 validation tasks and 500 testing tasks for all settings, with 8 queries each.
Hardware Specification | Yes | OOM: Out Of Memory on NVIDIA RTX A6000
Software Dependencies | No | The paper mentions software components such as the 'Adam (Kingma & Ba, 2015) optimizer' and a '2-layer GCN (Kipf & Welling, 2017)', but does not provide specific version numbers for these or for the other libraries/frameworks used in the implementation.
Experiment Setup | Yes | For each dataset except Amazon-Clothing, we evaluate the performance of the models in 5/10/20-way, 1/5-shot settings, i.e., six settings in total. [...] In the validation and testing phases, we sampled 50 validation tasks and 500 testing tasks for all settings, with 8 queries each. Table 8 (tuned hyperparameters and their ranges, by baseline): MAML-like (MAML, G-Meta): inner-step learning rate {0.01, 0.05, 0.1, 0.3, 0.5}, number of inner updates {1, 2, 5, 10, 20}, meta-learning rate {0.001, 0.003}; ProtoNet-like (ProtoNet, TENT): learning rate {5e-5, 1e-4, 3e-4, 5e-4, 1e-3, 3e-3, 5e-3}.
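The evaluation protocol above is the standard N-way K-shot episode setup: each task samples N classes, K labeled support nodes per class, and a set of query nodes (8 per class here). A minimal sketch of such an episode sampler is below; the function name, the label-map layout, and the assumption that "8 queries each" means 8 queries per class are illustrative, not taken from the paper's released code.

```python
import random

def sample_episode(labels, n_way=5, k_shot=1, n_query=8, rng=None):
    """Sample one N-way K-shot episode.

    labels: dict mapping class id -> list of node indices (illustrative layout).
    Returns (support, query): lists of (node, episode_label) pairs, with
    k_shot support and n_query query nodes per sampled class.
    """
    rng = rng or random.Random()
    # Pick N distinct classes for this episode.
    classes = rng.sample(sorted(labels), n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        # Draw disjoint support and query nodes from the class.
        nodes = rng.sample(labels[c], k_shot + n_query)
        support += [(n, episode_label) for n in nodes[:k_shot]]
        query += [(n, episode_label) for n in nodes[k_shot:]]
    return support, query

# Toy label map: 10 classes with 20 nodes each.
toy = {c: list(range(c * 20, (c + 1) * 20)) for c in range(10)}
support, query = sample_episode(toy, n_way=5, k_shot=1, n_query=8,
                                rng=random.Random(0))
print(len(support), len(query))  # 5-way 1-shot: 5 support, 40 query nodes
```

Repeating this sampler 500 times would yield the 500 testing tasks described for each setting; the unsupervised method in the paper differs in that training episodes are generated without ground-truth labels.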