Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Mamba-Based Graph Convolutional Networks: Tackling Over-smoothing with Selective State Space
Authors: Xin He, Yili Wang, Wenqi Fan, Xu Shen, Xin Juan, Rui Miao, Xin Wang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on benchmark datasets, we demonstrate that MbaGCN paves the way for future advancements in graph neural network research. In this section, we conduct a series of experiments to evaluate MbaGCN's performance, comparing it with several widely used GNN architectures. |
| Researcher Affiliation | Academia | Xin He¹, Yili Wang¹, Wenqi Fan², Xu Shen¹, Xin Juan¹, Rui Miao¹ and Xin Wang¹; ¹Jilin University, ²The Hong Kong Polytechnic University |
| Pseudocode | Yes | Algorithm 1 Mamba-based Graph Convolution Network. Input: adjacency matrix A ∈ ℝ^{N×N}, feature matrix X ∈ ℝ^{N×d}, state matrix P, learnable parameters W_Q, W_R, W_Δ, W_1, W_2. Output: the updated node representations Y. 1: Compute Q, R and Δ via Eq. 5; 2: while not convergent do 3: for l = 1 … L do 4: Compute H^l via Eq. 4; 5: Compute H̃^l and Y^l via Eq. 8; 6: Compute S_1^l and S_2^l via Eq. 9; 7: Modify the adjacency matrix A^l via Eq. 10; 8: end for 9: Obtain node representations Y; 10: Update all learnable parameters via back-propagation; 11: end while 12: return updated node representations Y. |
| Open Source Code | Yes | Our code is in https://github.com/hexin5515/MbaGCN. |
| Open Datasets | Yes | Datasets: We evaluate our method on a variety of datasets across different domains, focusing on full-supervised node classification tasks. The datasets include three citation graph datasets (Cora, Citeseer, Pubmed), two web graph datasets (Computers, Photo), and two heterogeneous graph datasets (Actor, Wisconsin). For citation and heterogeneous graph datasets, we use the feature vectors, class labels, and 10 random splits as proposed in [Chen et al., 2020]. For the web graph datasets, the same components are used following the protocol in [He et al., 2021]. |
| Dataset Splits | Yes | For citation and heterogeneous graph datasets, we use the feature vectors, class labels, and 10 random splits as proposed in [Chen et al., 2020]. For the web graph datasets, the same components are used following the protocol in [He et al., 2021]. |
| Hardware Specification | Yes | All experiments are performed on a system with an Intel(R) Xeon(R) Gold 5120 CPU and an NVIDIA L40 48G GPU. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) are mentioned in the paper. |
| Experiment Setup | No | The paper does not provide specific details on hyperparameters such as learning rate, batch size, optimizer settings, or explicit training schedules. It mentions layer configurations but not other setup details. |
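The reported pseudocode (Algorithm 1) describes a layer-wise loop: a graph-convolution step per layer, a selective gating step producing intermediate representations and scores, and an adjacency-matrix update driven by those scores. The following is a minimal structural sketch of that control flow in NumPy. All function bodies (`propagate`, `selective_gate`) and the adjacency-update rule are placeholder stand-ins, since the paper's Eqs. 4, 5, 8, 9 and 10 are not reproduced in this report; only the loop structure mirrors the quoted pseudocode.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, L = 5, 4, 2  # toy sizes: nodes, feature dim, number of layers

A = (rng.random((N, N)) < 0.4).astype(float)  # toy adjacency matrix
X = rng.standard_normal((N, d))               # toy node features
W = rng.standard_normal((d, d))               # stand-in for the learnable weights

def propagate(A, H, W):
    # Placeholder for Eq. 4: one row-normalized graph-convolution step.
    deg = A.sum(axis=1, keepdims=True) + 1e-9
    return (A / deg) @ H @ W

def selective_gate(H):
    # Placeholder for Eqs. 8-9: a sigmoid gate and its scores.
    S = 1.0 / (1.0 + np.exp(-H))
    return H * S, S

H = X
for l in range(L):                  # steps 3-8 of Algorithm 1
    H = propagate(A, H, W)          # Eq. 4 (stand-in)
    H, S = selective_gate(H)        # Eqs. 8-9 (stand-in)
    A = A * (S @ S.T > 0.5)         # Eq. 10 (stand-in): prune/reweight edges

Y = H                               # final node representations
print(Y.shape)  # (5, 4)
```

The parameter-update step (line 10 of the pseudocode) is omitted here; in practice it would be handled by an autodiff framework rather than hand-written back-propagation.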