Cross-Domain Few-Shot Graph Classification
Authors: Kaveh Hassani
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We run exhaustive experiments to evaluate the performance of contrastive and meta-learning strategies. We show that when coupled with metric-based meta-learning frameworks, the proposed encoder achieves the best average meta-test classification accuracy across all benchmarks. |
| Researcher Affiliation | Industry | Kaveh Hassani Autodesk AI Lab, Toronto, Canada kaveh.hassani@autodesk.com |
| Pseudocode | Yes | Algorithm 1: Training the proposed encoder with prototypical approach for one mini-batch of tasks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | We collected all the datasets from TUDataset (Morris et al. 2020) and OGB (Hu et al. 2020a). |
| Dataset Splits | Yes | We introduce three new few-shot graph classification benchmarks with fixed meta train/val/test splits constructed from publicly available datasets. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We opted for a k-shot 2-way setting and split the few remaining multi-class datasets into binary datasets by sampling without replacement. We then randomly selected 20 and 50 samples per class as support and query sets, respectively. We report the mean classification accuracy with standard deviation over query samples of the meta-testing tasks after ten runs. |
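
The Experiment Setup row describes an episodic protocol: 2-way tasks with 20 support and 50 query graphs sampled per class without replacement. Below is a minimal sketch of such an episode sampler. The dataset format (a list of `(graph, label)` pairs), function name, and argument names are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=2, n_support=20, n_query=50, rng=None):
    """Sample one few-shot task (episode) from a labeled graph dataset.

    Defaults reflect the reported setting: 2-way tasks with 20 support
    and 50 query graphs per class, drawn without replacement.
    """
    rng = rng or random.Random()

    # Group graph indices by class label.
    by_class = defaultdict(list)
    for idx, (_, label) in enumerate(dataset):
        by_class[label].append(idx)

    # Keep only classes with enough samples, then pick n_way of them.
    eligible = [c for c, idxs in by_class.items()
                if len(idxs) >= n_support + n_query]
    classes = rng.sample(eligible, n_way)

    support, query = [], []
    for task_label, c in enumerate(classes):
        # Relabel classes to 0..n_way-1 within the task.
        chosen = rng.sample(by_class[c], n_support + n_query)
        support += [(dataset[i][0], task_label) for i in chosen[:n_support]]
        query += [(dataset[i][0], task_label) for i in chosen[n_support:]]
    return support, query
```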
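The Pseudocode row points to Algorithm 1, which trains the encoder with a prototypical (metric-based meta-learning) approach over a mini-batch of tasks. The following sketch shows the standard prototypical-network objective for a single task, assuming `encoder` maps a batch of graphs to embeddings of shape `[num_graphs, d]` and labels are integer tensors; the paper's Algorithm 1 remains the authoritative procedure.

```python
import torch
import torch.nn.functional as F

def prototypical_loss(encoder, support_graphs, support_labels,
                      query_graphs, query_labels, n_way=2):
    """Cross-entropy over distances from query embeddings to class prototypes."""
    z_support = encoder(support_graphs)   # [n_way * n_support, d]
    z_query = encoder(query_graphs)       # [n_way * n_query, d]

    # Class prototype = mean embedding of that class's support graphs.
    prototypes = torch.stack([
        z_support[support_labels == c].mean(dim=0) for c in range(n_way)
    ])                                    # [n_way, d]

    # Classify queries by negative squared Euclidean distance to prototypes.
    logits = -torch.cdist(z_query, prototypes).pow(2)   # [num_queries, n_way]
    loss = F.cross_entropy(logits, query_labels)
    acc = (logits.argmax(dim=1) == query_labels).float().mean()
    return loss, acc
```

Consistent with "one mini-batch of tasks" in the Algorithm 1 caption, training would average this loss over the tasks in a mini-batch before a single optimizer step; at meta-test time only the accuracy over query samples is reported.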