Learning to Approximate Adaptive Kernel Convolution on Graphs
Authors: Jaeyoon Sim, Sooyeon Jeon, InJun Choi, Guorong Wu, Won Hwa Kim
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our model is tested on various standard datasets for node-wise classification, achieving state-of-the-art performance, and it is also validated on real-world brain network data for graph classification to demonstrate its practicality for Alzheimer's disease classification. |
| Researcher Affiliation | Academia | (1) Pohang University of Science and Technology, Pohang, South Korea; (2) University of North Carolina at Chapel Hill, Chapel Hill, USA. {simjy98, jsuyeon, surung9898, wonhwa}@postech.ac.kr, guorong_wu@med.unc.edu |
| Pseudocode | No | The paper describes its methods using mathematical formulations and textual explanations but does not include structured pseudocode or algorithm blocks (e.g., labeled 'Algorithm' or 'Pseudocode'). |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code, a link to a code repository (e.g., GitHub), or mention of code being available in supplementary materials for the described methodology. |
| Open Datasets | Yes | We conducted experiments on standard node classification datasets (in Table 1) that provide connected and undirected graphs. Cora, Citeseer and Pubmed (Sen et al. 2008) are constructed as citation networks, Amazon Computer and Amazon Photo (Shchur et al. 2018) define co-purchase networks, and Coauthor CS (Shchur et al. 2018) is a co-authorship network. |
| Dataset Splits | Yes | For Cora, Citeseer and Pubmed data, eighteen different baselines were used to compare the results for the node classification task as listed in Table 2. These standard benchmarks are provided with a fixed split of 20 nodes per class for training, 500 nodes for validation and 1000 nodes for testing as in other literature (Kim et al. 2022; Wu et al. 2022b). For Amazon and Coauthor datasets, seven baselines are used as in Table 3. For the others, the experiments were performed by randomly splitting the data as 60%/20%/20% for training/validation/testing as in (Luo et al. 2022) and replicating the split 10 times to obtain the mean and standard deviation of the evaluation metric (see the split sketch below the table). |
| Hardware Specification | No | The paper discusses computation time and mentions processes that require processing power (e.g., the eigendecomposition of the normalized Laplacian L̂), but it does not provide specific details about the hardware used, such as GPU models, CPU types, or memory specifications for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or frameworks used in the implementation or experimentation (e.g., 'Python 3.8', 'PyTorch 1.9', 'TensorFlow 2.x'). |
| Experiment Setup | Yes | The weight W_k can be easily trained with backpropagation, and a multi-variate s across all nodes can also be trained given a gradient on the scale s as s ← s − β_s ∇_s L, where β_s is a learning rate (see the gradient-update sketch below the table). |
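
The random 60%/20%/20% split with 10 replications quoted in the Dataset Splits row could be reproduced along the lines of the sketch below; the helper name `random_split`, the seed handling, and the commented evaluation loop are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def random_split(num_nodes, train_ratio=0.6, val_ratio=0.2, seed=0):
    """Hypothetical helper: randomly split node indices 60%/20%/20%
    into train/val/test, as described for the Amazon and Coauthor datasets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_nodes)          # shuffled node indices
    n_train = int(train_ratio * num_nodes)
    n_val = int(val_ratio * num_nodes)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Replicate the split 10 times to report mean and standard deviation of the
# evaluation metric; `evaluate` is a placeholder for training and testing
# the model on one split.
# accs = [evaluate(*random_split(num_nodes, seed=s)) for s in range(10)]
# print(np.mean(accs), np.std(accs))
```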
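The scale update quoted in the Experiment Setup row, s ← s − β_s ∇_s L, amounts to plain gradient descent on a per-node scale parameter trained alongside W_k by backpropagation. A minimal PyTorch sketch is given below; the node count, learning rate, and the surrogate loss are assumptions standing in for the paper's actual classification objective.

```python
import torch

# Minimal sketch of the update s <- s - beta_s * grad_s(L).
num_nodes, beta_s = 2708, 1e-2                  # illustrative sizes, not from the paper
s = torch.ones(num_nodes, requires_grad=True)   # one learnable scale per node

surrogate_loss = ((s - 2.0) ** 2).mean()        # stand-in for the task loss L
surrogate_loss.backward()                       # backpropagation yields grad_s(L)

with torch.no_grad():
    s -= beta_s * s.grad                        # s <- s - beta_s * grad_s(L)
    s.grad.zero_()
```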