Convergent Graph Solvers
Authors: Junyoung Park, Jinhyun Choo, Jinkyoo Park
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of CGS by applying it to various network-analytic and graph benchmark problems. The results indicate that CGS has competitive capabilities for predicting the stationary properties of graph systems, irrespective of whether the target systems are linear or non-linear. CGS also shows high performance for graph classification problems where the existence or the meaning of a fixed point is hard to define clearly, which highlights the potential of CGS as a general graph neural network architecture. |
| Researcher Affiliation | Academia | Junyoung Park, Jinhyun Choo & Jinkyoo Park KAIST, Daejeon, South Korea {junyoungpark,jinhyun.choo,jinkyoo.park}@kaist.ac.kr |
| Pseudocode | Yes | B SOFTWARE IMPLEMENTATION In this section, we provide a PyTorch-style pseudocode of CGS which computes the derivatives via the backward fixed-point iteration. Listing 1: CGS pseudocode |
| Open Source Code | Yes | 1The code is available at https://github.com/Junyoungpark/CGS. |
| Open Datasets | Yes | We assess the graph classification performance of CGS on six graph classification benchmarks: two social-network datasets (IMDB-Binary, IMDB-Multi) and four bioinformatics datasets (MUTAG, PROTEINS, PTC, NCI1). [...] Across all the benchmark datasets, we use the dataset implementation of DGL (Wang et al., 2019) and cross-validation indices generated with SciPy (Virtanen et al., 2020). |
| Dataset Splits | Yes | We perform 10-fold cross validation and report the average and standard deviation of its accuracy for each validation fold, following the evaluation scheme of Niepert et al. (2016). |
| Hardware Specification | Yes | We run all experiments on a single desktop equipped with a NVIDIA Titan X GPU and AMD Threadripper 2990WX CPU. |
| Software Dependencies | No | The paper mentions several software components, such as 'PyTorch (Paszke et al., 2019)', 'DGL (Wang et al., 2019)', and 'SciPy (Virtanen et al., 2020)'. However, it only provides the publication year for each package, not the specific version numbers (e.g., PyTorch 1.9, DGL 0.2.0, SciPy 1.0) that are crucial for reproducibility. |
| Experiment Setup | Yes | We train all models with the Adam optimizer (Kingma & Ba, 2014), whose learning rate is initialized as 0.001 and scheduled by the cosine annealing method (Loshchilov & Hutter, 2016). The loss function is the mean-squared error (MSE) between the model predictions and the ground truth pressures. [...] We used 32 training graphs per gradient update. [...] We train 1000 gradient steps for all models. [...] We set the contraction factor γ as 0.5. We train all models with the Adam optimizer whose learning rate is initialized as 0.001 and scheduled by the cosine annealing method for 500 epochs (100 for NCI1 due to the large dataset size) with 128 mini-batch size. We set the random seed of SciPy, PyTorch (Paszke et al., 2019), and DGL as 0. |
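The "backward fixed point iteration" cited under Pseudocode refers to implicit differentiation of a fixed-point layer: the forward pass iterates h ← f(h, x) to convergence (guaranteed when f is a contraction, as CGS enforces via its contraction factor), and the backward pass solves a second fixed-point equation for the gradient instead of backpropagating through every forward iteration. The sketch below is an illustrative reconstruction of that technique in PyTorch, not the authors' Listing 1; the class name and the demo contraction `f(h, x) = 0.5·tanh(h) + x` are assumptions for demonstration.

```python
import torch
from torch import autograd, nn


class FixedPointLayer(nn.Module):
    """Illustrative fixed-point layer with backward fixed-point iteration.

    `f(h, x)` is assumed to be a contraction in `h`, so both the forward
    iteration and the backward gradient iteration converge.
    """

    def __init__(self, f, tol=1e-8, max_iter=500):
        super().__init__()
        self.f, self.tol, self.max_iter = f, tol, max_iter

    def solve(self, g, z):
        # Plain fixed-point iteration z <- g(z); converges for contractions.
        for _ in range(self.max_iter):
            z_next = g(z)
            if torch.norm(z_next - z) < self.tol:
                return z_next
            z = z_next
        return z

    def forward(self, x):
        # Forward: find h* = f(h*, x) without building an autograd graph.
        with torch.no_grad():
            h = self.solve(lambda h: self.f(h, x), torch.zeros_like(x))
        # One differentiable application so x (and any parameters of f)
        # enter the graph.
        h = self.f(h, x)
        # Backward: replace the incoming gradient dL/dh with the solution of
        #   g = dL/dh + g @ (df/dh),  itself found by fixed-point iteration
        # over vector-Jacobian products.
        h0 = h.clone().detach().requires_grad_()
        f0 = self.f(h0, x)

        def backward_hook(grad):
            return self.solve(
                lambda g: autograd.grad(f0, h0, g, retain_graph=True)[0] + grad,
                grad,
            )

        if h.requires_grad:
            h.register_hook(backward_hook)
        return h


# Demo on a scalar contraction (illustrative only):
layer = FixedPointLayer(lambda h, x: 0.5 * torch.tanh(h) + x)
x = torch.tensor([0.3], requires_grad=True)
h = layer(x)            # h satisfies h = 0.5*tanh(h) + x
h.sum().backward()      # x.grad holds the implicit gradient
```

Because the backward pass only needs the converged fixed point, memory cost is constant in the number of forward iterations, which is the practical appeal of this differentiation scheme.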
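The Experiment Setup row can be translated into a concrete PyTorch training skeleton. The optimizer, learning rate, schedule, loss, batch size, step count, and seed below follow the reported values; the model and the random tensors are placeholders, since the actual CGS architecture and data pipeline are not reproduced here.

```python
import torch
from torch import nn

# Reported setup: Adam, lr 0.001, cosine annealing, MSE loss,
# 32 graphs per gradient update, 1000 gradient steps, seed 0.
torch.manual_seed(0)

model = nn.Linear(8, 1)  # placeholder standing in for the CGS model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=1000)
loss_fn = nn.MSELoss()

for step in range(1000):
    x = torch.randn(32, 8)   # 32 training samples per gradient update
    y = torch.randn(32, 1)   # dummy targets (ground-truth pressures in the paper)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    sched.step()             # cosine-annealed learning rate
```

Note that the paper uses a second configuration for graph classification (128 mini-batch size, 500 or 100 epochs); only the scheduler horizon and loop bounds would change.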