Unsupervised Inductive Graph-Level Representation Learning via Graph-Graph Proximity
Authors: Yunsheng Bai, Hao Ding, Yang Qiao, Agustin Marinovic, Ken Gu, Ting Chen, Yizhou Sun, Wei Wang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on five real graph datasets show that UGRAPHEMB achieves competitive accuracy in the tasks of graph classification, similarity ranking, and graph visualization. |
| Researcher Affiliation | Academia | ¹University of California, Los Angeles; ²Purdue University |
| Pseudocode | No | The paper describes the proposed mechanism using equations and text but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about open-source code availability or links to a code repository for the described methodology. |
| Open Datasets | Yes | We evaluate the methods on five real graph datasets, PTC, IMDBM, WEB, NCI109, and REDDIT12K. |
| Dataset Splits | Yes | For each dataset, we split it into training, validation, and testing sets by 6:2:2. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper discusses various methods and models like GIN and GRAPHSAGE but does not provide specific version numbers for any software dependencies or libraries used in the implementation. |
| Experiment Setup | No | The paper discusses parameter sensitivity for the embedding dimension and the percentage of training pairs, but it does not list concrete hyperparameter values (e.g., learning rate, batch size, optimizer settings) or the training configuration needed to reproduce the experiments. |
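The paper reports only the 6:2:2 train/validation/test ratio, not the splitting procedure. A minimal sketch of one plausible implementation (the function name, seeded shuffle, and use of a flat list of graphs are all assumptions, not details from the paper):

```python
import random

def split_graphs(graphs, ratios=(0.6, 0.2, 0.2), seed=0):
    """Shuffle a list of graphs and split it into train/val/test by the given ratios.

    Hypothetical helper: the paper does not specify whether the split was
    random, stratified, or deterministic.
    """
    assert abs(sum(ratios) - 1.0) < 1e-9, "ratios must sum to 1"
    indices = list(range(len(graphs)))
    random.Random(seed).shuffle(indices)  # fixed seed for repeatability
    n = len(graphs)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    train = [graphs[i] for i in indices[:n_train]]
    val = [graphs[i] for i in indices[n_train:n_train + n_val]]
    test = [graphs[i] for i in indices[n_train + n_val:]]
    return train, val, test

# Placeholder graph IDs stand in for actual graph objects.
graphs = list(range(100))
train, val, test = split_graphs(graphs)
print(len(train), len(val), len(test))  # 60 20 20
```

Any remainder from integer truncation falls into the test set, so the three parts always cover the full dataset exactly once.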