Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Asymmetric Learning for Spectral Graph Neural Networks

Authors: Fangbing Liu, Qing Wang

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on eighteen benchmark datasets show that asymmetric learning consistently improves the performance of spectral GNNs for both heterophilic and homophilic graphs. This improvement is especially notable for heterophilic graphs, where the optimization process is generally more complex than for homophilic graphs.
Researcher Affiliation | Academia | Graph Research Lab, School of Computing, Australian National University (EMAIL, EMAIL)
Pseudocode | Yes | Algorithm 1: Asymmetric Training
Open Source Code | Yes | Code: https://github.com/Mia-321/asym-opt.git
Open Datasets | Yes | Extensive experiments on eighteen benchmark datasets for node classification tasks, including: six small heterophilic graphs (Texas, Wisconsin, Actor, Chameleon, Squirrel, Cornell), five large heterophilic graphs (Roman Empire, Amazon Ratings, Minesweeper, Tolokers, Questions), and seven homophilic graphs (Citeseer, Pubmed, Cora, Computers, Photo, Coauthor-CS, Coauthor-Physics).
Dataset Splits | Yes | For each dataset, a sparse splitting is employed where nodes are randomly partitioned into train/validation/test sets with ratios of 2.5%/2.5%/95%, respectively. For the Citeseer, Pubmed, and Cora datasets, the setting from (Chien et al. 2021; He, Wei, and Wen 2022) is used: 20 nodes per class are used for training, 500 nodes for validation, and 1,000 nodes for testing.
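The sparse 2.5%/2.5%/95% split described above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and seeding convention are assumptions.

```python
import random

def sparse_split(num_nodes, train_ratio=0.025, val_ratio=0.025, seed=0):
    """Randomly partition node indices into train/validation/test sets.

    Uses the 2.5%/2.5%/95% sparse-splitting ratios described in the report.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(num_nodes * train_ratio)
    n_val = int(num_nodes * val_ratio)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

# For a graph with 1,000 nodes: 25 train, 25 validation, 950 test nodes.
train, val, test = sparse_split(1000)
```

For Citeseer, Pubmed, and Cora, a per-class selection (20 training nodes per class) would replace the purely random ratio split shown here.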
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory specifications used for running its experiments.
Software Dependencies | No | The paper mentions using the "Adam optimizer" but does not provide specific version numbers for any key software libraries (e.g., PyTorch, TensorFlow) or programming languages.
Experiment Setup | Yes | Input: training set S = (X, {y_i}_{i=1}^{m}), validation set D_val = (X, {y_i}_{i=m+1}^{m+q}), loss function ℓ, learning rate η, maximum iteration number t_max, and optimizer OP(·). Q4: How do the exponential decay parameters β_{π_Θ} and β_{π_W} affect the performance of asymmetric learning? We explore the impact of the hyperparameters β_{π_Θ} and β_{π_W} on test accuracy when using ChebNet for node classification tasks. The parameters β_{π_Θ} and β_{π_W} increase from 0 to 0.9 in steps of 0.1, with an additional value at 0.99.
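As a rough illustration of how separate exponential-decay parameters for the two parameter groups might behave, the sketch below applies Adam-style first-moment averaging with distinct β values for the polynomial coefficients (Θ) and the feature-transform weights (W). This is a hypothetical sketch based on the hyperparameter grid described above, not the authors' implementation; the chosen β values and update rule are assumptions.

```python
def ema_update(m, grad, beta):
    """One step of an exponential moving average of gradients,
    as in Adam's first-moment estimate: m <- beta*m + (1-beta)*grad."""
    return beta * m + (1.0 - beta) * grad

# Separate decay rates for the two parameter groups (values from the
# grid 0.0..0.9 plus 0.99 explored in the paper; this pairing is illustrative):
beta_theta = 0.9   # decay for the polynomial coefficients Theta
beta_w = 0.99      # decay for the feature-transform weights W

m_theta, m_w = 0.0, 0.0
for g in [1.0, 1.0, 1.0]:  # toy constant-gradient stream
    m_theta = ema_update(m_theta, g, beta_theta)
    m_w = ema_update(m_w, g, beta_w)

# With a constant gradient of 1, both averages converge toward 1;
# the smaller decay (Theta's) adapts faster than the larger one (W's).
```

The point of the sketch is only that a larger β smooths gradient history more aggressively, which is the knob the paper's Q4 ablation varies independently for the two parameter groups.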