Dual Low-Rank Graph Autoencoder for Semantic and Topological Networks
Authors: Zhaoliang Chen, Zhihao Wu, Shiping Wang, Wenzhong Guo
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare the proposed model with state-of-the-art methods on several datasets, which demonstrates the superior accuracy of DLR-GAE in semi-supervised classification. In order to validate the effectiveness of DLR-GAE, we utilize several widely used graph datasets for the performance evaluation, including Citeseer, Cora Full, Blog Catalog, ACM, Flickr and UAI. |
| Researcher Affiliation | Academia | 1 College of Computer and Data Science, Fuzhou University, Fuzhou, China 2 Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fuzhou University, Fuzhou, China |
| Pseudocode | Yes | Algorithm 1: Dual Low-Rank Graph Autoencoder. Input: Node features X ∈ ℝ^{n×m}, topological adjacency matrix A_T ∈ ℝ^{n×n}, semantic adjacency matrix A_S ∈ ℝ^{n×n}, number of factor matrices I, ground truth Y ∈ ℝ^{n×c}, hyperparameters α and γ. Output: Node embedding Z. |
| Open Source Code | No | The paper does not include any explicit statement about providing access to source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | In order to validate the effectiveness of DLR-GAE, we utilize several widely used graph datasets for the performance evaluation, including Citeseer, Cora Full, Blog Catalog, ACM, Flickr and UAI. These datasets describe distinct types of node connections, e.g., paper citations, social relationships and web linkages. A statistical summary of these datasets is illustrated in Table 1. |
| Dataset Splits | Yes | in the following experiments, we shuffle datasets and randomly select 20 labeled samples per class for training, 500 samples for validation and 1,000 samples for testing. (A sketch of this split protocol appears below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | Learning rates of all compared frameworks are fixed as 0.01. For all GNN-based methods, the number of hidden units at each layer is fixed as 16 and a 2-layer GCN is adopted. As for DLR-GAE, we keep consistent with compared GCN-based models, setting the number of hidden units as 16 and applying parallel two-layer GCNs. The learning rate of DLR-GAE is also fixed as 0.01 and the weight decay is 5 × 10⁻⁴. The choice of k ranges in [5, 10, ..., 50] when constructing semantic graphs via the KNN algorithm. The number of latent factors I is fixed as 5. (The KNN graph construction and this training configuration are sketched below.) |
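The split protocol quoted in the Dataset Splits row is concrete enough to sketch. The helper below is a hypothetical illustration, not code from the paper; `split_indices` and its argument names are our own.

```python
import numpy as np

def split_indices(labels, n_classes, seed=0, n_per_class=20, n_val=500, n_test=1000):
    """Shuffle, then take 20 labeled nodes per class for training,
    500 nodes for validation and 1,000 nodes for testing."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(labels))              # shuffle the dataset
    train = np.concatenate(
        [perm[labels[perm] == c][:n_per_class] for c in range(n_classes)]
    )
    rest = perm[~np.isin(perm, train)]               # all remaining shuffled nodes
    return train, rest[:n_val], rest[n_val:n_val + n_test]
```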
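The Experiment Setup row states that the semantic graph A_S is built with KNN over node features, with k swept over [5, 10, ..., 50]. A minimal scikit-learn sketch follows; the paper does not specify a distance metric, so cosine is an assumption here.

```python
from sklearn.neighbors import kneighbors_graph

def semantic_adjacency(X, k):
    # Connect each node to its k nearest neighbors in feature space.
    # metric="cosine" is an assumption; the paper does not state the metric.
    A = kneighbors_graph(X, n_neighbors=k, metric="cosine", include_self=False)
    return A.maximum(A.T)  # symmetrize so A_S is a valid undirected adjacency

# Sweep k as stated in the setup row:
# candidates = [semantic_adjacency(X, k) for k in range(5, 55, 5)]
```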
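The Experiment Setup and Pseudocode rows together pin down the encoder side of DLR-GAE: parallel two-layer GCNs with 16 hidden units over A_T and A_S, trained at learning rate 0.01 with weight decay 5 × 10⁻⁴. The PyTorch Geometric sketch below reproduces only those quoted settings; the averaging fusion is a placeholder, and the dual low-rank decoder with its α and γ terms is omitted because the paper's formulation is not reproduced in this report.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv  # assumes PyTorch Geometric is installed

class ParallelGCN(torch.nn.Module):
    """Two parallel two-layer GCN encoders, one per graph, 16 hidden units each."""
    def __init__(self, in_dim, out_dim, hidden=16):
        super().__init__()
        self.t1, self.t2 = GCNConv(in_dim, hidden), GCNConv(hidden, out_dim)
        self.s1, self.s2 = GCNConv(in_dim, hidden), GCNConv(hidden, out_dim)

    def forward(self, x, edge_t, edge_s):
        z_t = self.t2(F.relu(self.t1(x, edge_t)), edge_t)  # topological branch
        z_s = self.s2(F.relu(self.s1(x, edge_s)), edge_s)  # semantic branch
        return (z_t + z_s) / 2  # placeholder fusion, not the paper's decoder

model = ParallelGCN(in_dim=3703, out_dim=6)  # Citeseer-sized dims, for illustration
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
```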