Multi-view Unsupervised Graph Representation Learning
Authors: Jiangzhang Gan, Rongyao Hu, Mengmeng Zhan, Yujie Mo, Yingying Wan, Xiaofeng Zhu
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results verify the effectiveness of our proposed method, compared to state-of-the-art methods. Extensive experiments on benchmark data sets clearly demonstrate that our method outperforms the state-of-the-art methods on different downstream tasks. |
| Researcher Affiliation | Academia | (1) Center for Future Media and School of Computer Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China; (2) School of Mathematical and Computational Science, Massey University Auckland Campus, Auckland 0632, New Zealand; (3) Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen 518000, China |
| Pseudocode | Yes | We list the pseudo-code of our method in the Appendix. |
| Open Source Code | No | The paper does not explicitly state that source code is provided or give a link to a repository for the described methodology. |
| Open Datasets | Yes | The data sets used include citation network data (i.e., Citeseer, Cora), reference network data (i.e., Wiki-CS), and co-purchase network data (i.e., Computers); a loading sketch follows this table. |
| Dataset Splits | No | The paper lists the data sets used (Citeseer, Cora, Wiki-CS, Computers) but does not give explicit train/validation/test splits, percentages, or sample counts; a hypothetical split is sketched after this table. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU, CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or solvers). |
| Experiment Setup | No | The paper mentions tuning the parameters λ1, λ2, and η, and states that 'Eq. (17) can be optimized by the standard gradient descent algorithm.' However, it does not report the learning rate, batch size, number of epochs, or optimizer used for the main experimental results; a hedged training-loop sketch follows this table. |
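
The four benchmarks named under Open Datasets are all publicly available. As a minimal sketch, assuming PyTorch Geometric as the tooling (the paper does not say how the data were obtained), they can be fetched as follows:

```python
# Hypothetical loaders: the paper does not state its data-loading tooling.
# PyTorch Geometric ships public copies of all four benchmarks.
from torch_geometric.datasets import Planetoid, WikiCS, Amazon

cora = Planetoid(root="data/Planetoid", name="Cora")[0]
citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")[0]
wiki_cs = WikiCS(root="data/WikiCS")[0]
computers = Amazon(root="data/Amazon", name="Computers")[0]

for name, g in [("Cora", cora), ("CiteSeer", citeseer),
                ("Wiki-CS", wiki_cs), ("Computers", computers)]:
    print(f"{name}: {g.num_nodes} nodes, {g.num_edges} edges")
```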
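
Because the paper reports no train/validation/test splits, anyone reproducing the downstream-task results has to pick their own. Below is a hypothetical random node-level split; the 10%/10%/80% ratios are placeholders, not values from the paper:

```python
import torch

def random_split(num_nodes, train_ratio=0.1, val_ratio=0.1, seed=0):
    """Random node-level split with placeholder ratios; the paper
    does not report the splits it actually used."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_ratio * num_nodes)
    n_val = int(val_ratio * num_nodes)
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask
```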
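
For the experiment setup, the paper states only that Eq. (17) 'can be optimized by the standard gradient descent algorithm' and that λ1, λ2, and η are tuned. The loop below is a sketch under those statements: `model` is a hypothetical stand-in for the proposed encoder, the three-term loss decomposition is a guess, and the learning rate and epoch count are not from the paper.

```python
import torch

def train(model, data, lam1=1.0, lam2=1.0, lr=0.01, epochs=200):
    # Plain SGD to match "standard gradient descent";
    # lr and epochs are placeholders, not reported values.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for epoch in range(epochs):
        optimizer.zero_grad()
        # Hypothetical decomposition of Eq. (17): a base term plus two
        # regularizers weighted by the paper's lambda_1 and lambda_2.
        loss_main, loss_reg1, loss_reg2 = model(data)
        loss = loss_main + lam1 * loss_reg1 + lam2 * loss_reg2
        loss.backward()
        optimizer.step()
    return model
```

(η is left out of the sketch because the paper does not say whether it weights a loss term or controls something else.)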