Unsupervised Deep Graph Structure and Embedding Learning
Authors: Xiaobo Shen, Lei Shi, Xiuwen Gong, Shirui Pan
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments on node classification and clustering demonstrate the effectiveness of the proposed method over state-of-the-art methods, especially under heavy perturbation. |
| Researcher Affiliation | Academia | Xiaobo Shen¹, Lei Shi¹, Xiuwen Gong², and Shirui Pan³; ¹Nanjing University of Science and Technology, ²University of Technology Sydney, ³Griffith University |
| Pseudocode | Yes | Algorithm 1 UPGNN. Input: adjacency matrix A, attribute matrix X, parameters α, β, γ, λ, τ, learning rates η, η; Output: adjacency matrix S, network parameters Θ, Ω. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | Three benchmark datasets, i.e., Cora, Citeseer, and Polblogs, are employed for evaluation, and the largest connected component is considered for each graph. |
| Dataset Splits | Yes | For each graph, we randomly choose 10% of nodes for training, 10% of nodes for validation, and the remaining 80% of nodes for testing. (See the split sketch after the table.) |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers. |
| Experiment Setup | Yes | For the proposed method, α, β, γ, and λ are searched over a wide range. For GNNs, we set the dimensions of the hidden and output layers to 512 and 64, respectively. (See the encoder sketch after the table.) For the baselines, we use official codes or borrow results reported in their papers. UPGNN achieves good performance when α, β, γ, and λ are set to 2×10⁻³, 2.5, 2⁻³, and 4×10⁻³, respectively. |
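
The 10%/10%/80% protocol quoted in the Dataset Splits row amounts to a random permutation of node indices. The sketch below is a minimal illustration of such a split; the node count, RNG seed, and use of NumPy are assumptions for the example, not details taken from the paper.

```python
# Minimal sketch of a 10% train / 10% val / 80% test random node split.
# The node count and seed below are illustrative, not values from the paper.
import numpy as np

def random_node_split(num_nodes, train_ratio=0.1, val_ratio=0.1, seed=0):
    """Return boolean masks selecting train/val/test nodes at random."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_ratio * num_nodes)
    n_val = int(val_ratio * num_nodes)

    train_mask = np.zeros(num_nodes, dtype=bool)
    val_mask = np.zeros(num_nodes, dtype=bool)
    test_mask = np.zeros(num_nodes, dtype=bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True  # remaining ~80% of nodes
    return train_mask, val_mask, test_mask

# Illustrative node count for a graph's largest connected component.
train_mask, val_mask, test_mask = random_node_split(num_nodes=2485)
```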
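
The Experiment Setup row only reports the GNN layer sizes (hidden 512, output 64). As a hedged illustration of what such an encoder could look like, the sketch below builds a generic two-layer GCN-style encoder in PyTorch; the GCN propagation rule, ReLU activation, and PyTorch itself are assumptions, since the quoted setup specifies only the dimensions.

```python
# Sketch of a two-layer graph-convolutional encoder with the reported sizes
# (hidden dim 512, output dim 64). GCN propagation, ReLU, and PyTorch are
# assumptions; the quoted setup specifies only the layer dimensions.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_norm, x):
        # a_norm: normalized adjacency matrix (dense), x: node features
        return a_norm @ self.linear(x)

class Encoder(nn.Module):
    def __init__(self, in_dim, hidden_dim=512, out_dim=64):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hidden_dim)
        self.gc2 = GCNLayer(hidden_dim, out_dim)

    def forward(self, a_norm, x):
        h = torch.relu(self.gc1(a_norm, x))
        return self.gc2(a_norm, h)  # 64-dimensional node embeddings
```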