Learning Graph Representation via Graph Entropy Maximization
Authors: Ziheng Sun, Xudong Wang, Chris Ding, Jicong Fan
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the effectiveness of our method in comparison to many baselines in unsupervised learning and semi-supervised learning tasks. |
| Researcher Affiliation | Academia | ¹School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China; ²Shenzhen International Center for Industrial and Applied Mathematics, Shenzhen Research Institute of Big Data, Shenzhen, China. |
| Pseudocode | Yes | Algorithm 1 Optimization for GeMax (11) and (12) |
| Open Source Code | Yes | The code of our method is available at https://github.com/MathAdventurer/GeMax. |
| Open Datasets | Yes | In this section, we evaluate the effectiveness of our GeMax method in graph learning tasks including unsupervised and semi-supervised representation learning, on TUdataset (Morris et al., 2020). Table 1: Statistics of TUdataset (Morris et al., 2020) |
| Dataset Splits | Yes | Each time, we performed 10-fold cross-validation on each dataset. For unsupervised learning, each fold uses 90% of the total data as unlabeled data for contrastive pre-training and 10% as labeled testing data. For semi-supervised learning, each fold uses 80% of the total data as unlabeled data, 10% as labeled training data, and 10% as labeled testing data. |
| Hardware Specification | Yes | We run experiments on a server with an Intel i7 CPU and RTX 3090 GPUs. |
| Software Dependencies | No | The paper provides code snippets in Appendix D.10 that use PyTorch functions (e.g., torch.tensor, torch.matmul, torch.norm), implying the use of PyTorch, but it does not specify version numbers for PyTorch or any other software libraries required for reproduction. |
| Experiment Setup | Yes | The model is trained with a batch size of 128 and a learning rate of 0.001. The contrastive learning methods (e.g., JOAOv2 and AutoGCL) use 30 epochs of contrastive pre-training under the naive strategy. We set µ = γ = 0.5. |
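
The "Dataset Splits" row fully specifies the fold structure but not how to build it. The sketch below shows one way to realize the 10-fold, 80/10/10 semi-supervised splits; the use of scikit-learn's `StratifiedKFold` and label stratification are assumptions for illustration, and the official GeMax code may construct its folds differently.

```python
# Hedged sketch of the split scheme quoted in the "Dataset Splits" row:
# per fold, 80% unlabeled data, 10% labeled training data, 10% labeled test data.
# StratifiedKFold and stratification by label are assumptions, not confirmed by the paper.
import numpy as np
from sklearn.model_selection import StratifiedKFold


def gemax_semi_supervised_folds(labels, seed=0):
    """Yield (unlabeled_idx, labeled_train_idx, test_idx) index arrays per fold."""
    labels = np.asarray(labels)
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for rest_idx, test_idx in outer.split(np.zeros(len(labels)), labels):
        # rest_idx holds 90% of the data; split it 8:1 so that one ninth
        # (~10% of the total) becomes the labeled training set and the
        # remaining eight ninths (~80%) serve as unlabeled pre-training data.
        inner = StratifiedKFold(n_splits=9, shuffle=True, random_state=seed)
        unlabeled_rel, train_rel = next(
            inner.split(np.zeros(len(rest_idx)), labels[rest_idx])
        )
        yield rest_idx[unlabeled_rel], rest_idx[train_rel], test_idx
```

For the unsupervised setting quoted in the same row (90% unlabeled, 10% test), the outer split alone suffices: `rest_idx` is the unlabeled pool and `test_idx` the labeled test fold.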
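
Likewise, the "Experiment Setup" row reports only scalar hyperparameters. The sketch below collects them into a PyTorch pre-training skeleton; the Adam optimizer and the `encoder`, `loader`, and `contrastive_loss` placeholders are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical pre-training skeleton using the hyperparameters reported above.
# Only batch size, learning rate, epoch count, and mu/gamma come from the section;
# everything else (Adam, encoder, loader, contrastive_loss) is assumed.
import torch

CONFIG = {
    "batch_size": 128,       # reported batch size
    "lr": 1e-3,              # reported learning rate
    "pretrain_epochs": 30,   # contrastive pre-training epochs (naive strategy)
    "mu": 0.5,               # reported weight mu
    "gamma": 0.5,            # reported weight gamma
}


def pretrain(encoder, loader, contrastive_loss, config=CONFIG):
    """Run contrastive pre-training with the reported hyperparameters."""
    optimizer = torch.optim.Adam(encoder.parameters(), lr=config["lr"])
    encoder.train()
    for _ in range(config["pretrain_epochs"]):
        for batch in loader:  # loader is assumed to yield mini-batches of 128 graphs
            loss = contrastive_loss(encoder, batch,
                                    mu=config["mu"], gamma=config["gamma"])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```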