Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
LOHA: Direct Graph Spectral Contrastive Learning Between Low-Pass and High-Pass Views
Authors: Ziyun Zou, Yinghui Jiang, Lian Shen, Juan Liu, Xiangrong Liu
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present a comprehensive evaluation of LOHA through a series of node classification experiments on 9 real-world datasets. Comparisons between LOHA and other baselines, together with ablation studies, validate the effectiveness of LOHA and help us gain further insights. |
| Researcher Affiliation | Academia | Ziyun Zou¹, Yinghui Jiang², Lian Shen¹, Juan Liu³, Xiangrong Liu¹,²*. ¹Department of Computer Science and Technology, Xiamen University; ²National Institute for Data Science in Health and Medicine, Xiamen University; ³Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods through mathematical equations and textual descriptions, but does not contain a dedicated pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement regarding the release of source code or a link to a code repository. |
| Open Datasets | Yes | We choose widely used real-world datasets with different homophily levels to evaluate the performance of LOHA. (1) Homophilic Graphs: Cora, Citeseer, and PubMed from (Yang, Cohen, and Salakhudinov 2016). (2) Heterophilic Graphs: Cornell, Texas, Actor and Wisconsin from (Pei et al. 2020); Chameleon from (Rozemberczki, Allen, and Sarkar 2021); Amazon-ratings (Amazon for short in tables) from (Platonov et al. 2023). |
| Dataset Splits | Yes | We follow the training and validation strategies of (Chien et al. 2021), where nodes are randomly split into 60%, 20%, and 20% subsets. All comparative methods share the same fixed random splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | Output embedding size and hyper-parameters in stage 2 are also fixed for fair comparison. More detailed settings can be found in Appendix. |
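The 60%/20%/20% fixed random node split quoted above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `random_split`, the seed value, and the use of NumPy are assumptions; the paper only specifies the split proportions and that splits are fixed across all compared methods.

```python
import numpy as np

def random_split(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Partition node indices into train/val/test sets (60%/20%/20%
    by default). A fixed seed makes the split reproducible, so every
    compared method can share the same split, as the paper describes.
    Note: function name and seed are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)          # random order of all node ids
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:]              # remaining ~20% of nodes
    return train, val, test

# Example: Cora has 2,708 nodes
train, val, test = random_split(2708)
```

Because the seed (and hence the permutation) is fixed, re-running the function yields identical index sets, which is what allows all baselines to be evaluated on the same splits.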