Exploiting Mutual Information for Substructure-aware Graph Representation Learning
Authors: Pengyang Wang, Yanjie Fu, Yuanchun Zhou, Kunpeng Liu, Xiaolin Li, Kien Hua
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we present extensive experimental results to demonstrate the improved performances of our method with real-world data. |
| Researcher Affiliation | Academia | Pengyang Wang1, Yanjie Fu1, Yuanchun Zhou2, Kunpeng Liu1, Xiaolin Li3 and Kien Hua1 (1 University of Central Florida; 2 Computer Network Information Center, Chinese Academy of Sciences; 3 Nanjing University) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We evaluate the performance over two real-world check-in datasets [Yang et al., 2014] of New York and Tokyo. |
| Dataset Splits | Yes | We conduct 10-fold cross validation and report the average Accuracy@N. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions various models and networks (e.g., GCN, GAE, DGI) but does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | In the experiment, we set the number of GCN layers = 2, the input feature size = 100, the output feature size = 40, and the learning rate = 0.001; the overall objective is L = λr Lr + λj Lj + λs Ls. |
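
The paper does not release code, so the following is only a minimal sketch of the reported experiment setup: a 2-layer GCN encoder with input size 100 and output size 40, learning rate 0.001, and a weighted loss L = λr Lr + λj Lj + λs Ls. The hidden size, the Adam optimizer, the λ values, and the three placeholder loss terms are assumptions standing in for the paper's mutual-information objectives, not the authors' implementation.

```python
# Hedged sketch of the reported setup; NOT the authors' code (none is released).
# Assumptions: hidden size, Adam optimizer, lambda weights, and the placeholder
# loss terms l_r / l_j / l_s standing in for the paper's MI-based objectives.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), A_hat a normalized adjacency."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return F.relu(self.linear(a_hat @ h))


class GCNEncoder(nn.Module):
    """Two GCN layers as reported: input size 100, output size 40 (hidden size assumed)."""

    def __init__(self, in_dim=100, hidden_dim=64, out_dim=40):  # hidden_dim=64 is an assumption
        super().__init__()
        self.gcn1 = GCNLayer(in_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, out_dim)

    def forward(self, a_hat, x):
        return self.gcn2(a_hat, self.gcn1(a_hat, x))


def total_loss(l_r, l_j, l_s, lam_r=1.0, lam_j=1.0, lam_s=1.0):
    """Weighted sum L = lambda_r*L_r + lambda_j*L_j + lambda_s*L_s (weights assumed)."""
    return lam_r * l_r + lam_j * l_j + lam_s * l_s


if __name__ == "__main__":
    n = 8                                    # toy graph with 8 nodes
    x = torch.randn(n, 100)                  # node features, input size 100
    a = torch.eye(n)                         # placeholder adjacency (self-loops only)
    deg_inv_sqrt = a.sum(1).pow(-0.5)
    a_hat = deg_inv_sqrt[:, None] * a * deg_inv_sqrt[None, :]  # symmetric normalization

    model = GCNEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=0.001)  # lr from the paper; Adam is assumed

    z = model(a_hat, x)                      # node embeddings, output size 40
    # Placeholder loss terms; the real L_r, L_j, L_s are the paper's objectives.
    l_r = z.pow(2).mean()
    l_j = z.abs().mean()
    l_s = z.var()
    loss = total_loss(l_r, l_j, l_s)
    loss.backward()
    opt.step()
    print(float(loss))
```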