Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning

Authors: Jianqing Liang, Xinkai Wei, Min Chen, Zhiqiang Wang, Jiye Liang

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on benchmark datasets demonstrate state-of-the-art empirical performance.
Researcher Affiliation Academia Jianqing Liang, Xinkai Wei, Min Chen, Zhiqiang Wang, Jiye Liang* Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, School of Computer and Information Technology, Shanxi University, Taiyuan 030006, Shanxi, China EMAIL, EMAIL, EMAIL
Pseudocode Yes Algorithm 1: GTCA
Input: the adjacency matrix A, the feature matrix X, and the number of training epochs J
Output: feature matrix H
1: for epoch in 1 to J do
2:   Generate GNN embeddings Hθ and NodeFormer embeddings Hφ with the GNN encoder fθ, the NodeFormer encoder gφ, the adjacency matrix A, and the feature matrix X;
3:   Generate the GNN k-NN node set Bθi, the NodeFormer k-NN node set Bφi, and the topological k-NN set Ti from Hθ, Hφ, and the adjacency matrix A;
4:   Calculate positive pairs Pi and negative pairs Ni, i = 1, …, N, with Equation (5) and Equation (6);
5:   Compute the loss L with Equation (11);
6:   Apply gradient descent to minimize L and update the parameters;
7: end for
8: Calculate the final output feature matrix H with Equation (12);
9: return H for downstream tasks;
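Steps 3–4 of the pseudocode can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the cosine-similarity k-NN construction and the rule that a positive pair requires a node to appear in all three neighbor sets are assumptions standing in for Equations (5) and (6), whose exact form is in the paper.

```python
import numpy as np

def knn_sets(H, k):
    """For each node i, return the index set of its k nearest
    neighbors in embedding space under cosine similarity
    (self-similarity is excluded)."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    sim = Hn @ Hn.T
    np.fill_diagonal(sim, -np.inf)          # exclude the node itself
    return [set(np.argsort(-sim[i])[:k]) for i in range(len(H))]

def positive_pairs(B_theta, B_phi, T):
    """Assumed positive-pair rule: j is a positive for i only when j
    lies in the GNN k-NN set, the NodeFormer k-NN set, AND the
    topological k-NN set of i simultaneously."""
    return [bt & bp & t for bt, bp, t in zip(B_theta, B_phi, T)]
```

With the per-node positive sets in hand, every other node would fall into the negative set Ni, and the contrastive loss of Equation (11) would be computed over these pairs.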
Open Source Code Yes Code https://github.com/a-hou/GTCA
Open Datasets Yes To validate the effectiveness of the GTCA method, we perform extensive experiments on 5 benchmark datasets for node classification including a commonly used citation network, i.e., Cora (Sen et al. 2008), a reference network constructed based on Wikipedia, i.e., Wiki-CS (Mernyei and Cangea 2020), a co-authorship network, i.e., Coauthor CS (Shchur et al. 2018), and two product co-purchase networks, i.e., Amazon-Computers and Amazon-Photo (Shchur et al. 2018).
Dataset Splits Yes For Cora dataset, we follow (Yang, Cohen, and Salakhudinov 2016) to randomly select 20 nodes per class for training, 500 nodes for validation, and the remaining nodes for testing. For Wiki-CS, Coauthor-CS, Amazon-Computers and Amazon-Photo datasets, we follow (Liu, Gao, and Ji 2020) to randomly select 20 nodes per class for training, 30 nodes per class for validation, and the remaining nodes for testing.
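The split protocol described above (20 nodes per class for training, a fixed-size validation pool, remainder for testing) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function name and seed handling are assumptions.

```python
import numpy as np

def split_per_class(labels, n_train_per_class, n_val, seed=0):
    """Randomly select `n_train_per_class` nodes per class for
    training, then `n_val` of the remaining nodes for validation;
    everything left over becomes the test set."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        train.extend(rng.choice(idx, n_train_per_class, replace=False))
    train = np.array(train)
    rest = rng.permutation(np.setdiff1d(np.arange(len(labels)), train))
    return train, rest[:n_val], rest[n_val:]
```

For Cora this would be called with `n_train_per_class=20, n_val=500`; for the other four datasets the validation pool is instead 30 nodes per class, which would need a per-class loop like the training selection.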
Hardware Specification Yes All experiments are implemented in PyTorch and conducted on a server with NVIDIA GeForce RTX 3090 GPUs (24 GB memory each).
Software Dependencies No The paper mentions PyTorch but does not specify its version number. Text: "All experiments are implemented in PyTorch and conducted on a server with NVIDIA GeForce RTX 3090 (24 GB memory each)."
Experiment Setup Yes Table 2: Hyperparameter settings of GTCA on 5 datasets (lr is the learning rate).
Datasets           k    E    λ    lr
Cora               520  440  0.7  0.005
Wiki-CS            500  400  0.8  0.001
Coauthor-CS        240  420  0.4  0.001
Amazon-Computers   550  512  0.8  0.001
Amazon-Photo       510  512  0.7  0.001
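For reproduction purposes, the settings in Table 2 can be captured as a per-dataset configuration mapping. The key names (`k`, `E`, `lam`, `lr`) are assumptions mirroring the table's column headers, not identifiers from the authors' repository.

```python
# Hyperparameters transcribed from Table 2 of the paper.
# lam stands for the λ column; lr is the learning rate.
HPARAMS = {
    "Cora":             {"k": 520, "E": 440, "lam": 0.7, "lr": 0.005},
    "Wiki-CS":          {"k": 500, "E": 400, "lam": 0.8, "lr": 0.001},
    "Coauthor-CS":      {"k": 240, "E": 420, "lam": 0.4, "lr": 0.001},
    "Amazon-Computers": {"k": 550, "E": 512, "lam": 0.8, "lr": 0.001},
    "Amazon-Photo":     {"k": 510, "E": 512, "lam": 0.7, "lr": 0.001},
}
```

A training script could then look up `HPARAMS[dataset_name]` and pass the values to the model and optimizer constructors.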