How hard is it to distinguish graphs with graph neural networks?
Authors: Andreas Loukas
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | An empirical study involving 12 graph classification tasks and 420 networks reveals strong alignment between actual performance and theoretical predictions. |
| Researcher Affiliation | Academia | Andreas Loukas, École Polytechnique Fédérale de Lausanne, andreas.loukas@epfl.ch |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing code or a link to a code repository. |
| Open Datasets | No | The paper describes how the datasets were constructed, using 'geng [48]' to populate subgraphs, but it does not provide concrete access information (link, DOI, repository) for the resulting datasets themselves. |
| Dataset Splits | Yes | These were split into a training, a validation, and a test set (covering 90%, 5%, and 5% of the dataset, respectively). |
| Hardware Specification | No | The paper describes the experimental setup in terms of network architecture and training parameters, but it does not specify any hardware details like GPU models, CPU types, or cloud resources used for the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam' for training and 'GIN0' layers, but it does not provide specific version numbers for any software libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | Their depth and width varied in d ∈ {2, 3, 4, 5, 6, 7, 8} and w ∈ {1, 2, 4, 8, 16}, respectively; the message size was set equal to w, and no global state was used. Each network was trained using Adam with a decaying learning rate, and early stopping was employed when the validation accuracy reached 100% (see the sketch below the table). |
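The dataset-split and experiment-setup rows above describe the training protocol only in prose. The following is a minimal sketch of how that protocol could be reproduced, assuming PyTorch and PyTorch Geometric; the paper names neither framework, nor the learning-rate schedule, batch size, or the MLP inside each GIN0 layer, so those choices are assumptions, not the author's released code (none is linked).

```python
# Hedged sketch of the reported protocol: GIN0 layers (GINConv with eps fixed
# to 0), depth d and width w swept over the stated grids, a 90%/5%/5%
# train/validation/test split, Adam with a decaying learning rate, and early
# stopping once validation accuracy reaches 100%.
import torch
from torch import nn
from torch.utils.data import random_split
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GINConv, global_add_pool


class GIN0(nn.Module):
    """Message-passing network with `depth` GIN0 layers of width `width`."""

    def __init__(self, in_dim: int, width: int, depth: int, num_classes: int):
        super().__init__()
        self.convs = nn.ModuleList()
        dims = [in_dim] + [width] * depth
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            mlp = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU(),
                                nn.Linear(d_out, d_out), nn.ReLU())
            # eps=0 with train_eps=False corresponds to the "GIN0" variant.
            self.convs.append(GINConv(mlp, eps=0.0, train_eps=False))
        self.readout = nn.Linear(width, num_classes)

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = conv(x, edge_index)
        return self.readout(global_add_pool(x, batch))


def run(dataset, width, depth, epochs=200):
    # 90% / 5% / 5% train / validation / test split, as reported in the paper.
    n = len(dataset)
    n_train, n_val = int(0.9 * n), int(0.05 * n)
    train_set, val_set, _test_set = random_split(
        dataset, [n_train, n_val, n - n_train - n_val])

    model = GIN0(dataset.num_features, width, depth, dataset.num_classes)
    # Adam with a decaying learning rate (the exact schedule is an assumption).
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=50, gamma=0.5)
    loss_fn = nn.CrossEntropyLoss()

    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=32)

    for _epoch in range(epochs):
        model.train()
        for batch in train_loader:
            opt.zero_grad()
            out = model(batch.x, batch.edge_index, batch.batch)
            loss_fn(out, batch.y).backward()
            opt.step()
        sched.step()

        # Early stopping once validation accuracy reaches 100%.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for batch in val_loader:
                pred = model(batch.x, batch.edge_index, batch.batch).argmax(-1)
                correct += int((pred == batch.y).sum())
                total += batch.num_graphs
        if total > 0 and correct == total:
            break
    return model


# Hypothetical sweep over the depth/width grid reported in the paper:
# for depth in (2, 3, 4, 5, 6, 7, 8):
#     for width in (1, 2, 4, 8, 16):
#         run(dataset, width, depth)
```

The sweep at the bottom mirrors the 7 × 5 depth/width grid from the experiment-setup row; everything else (optimizer hyperparameters, readout, pooling) is a plausible default rather than a detail confirmed by the paper.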