Isometric Transformation Invariant and Equivariant Graph Convolutional Networks
Authors: Masanobu Horie, Naoki Morita, Toshiaki Hishinuma, Yu Ihara, Naoto Mitsume
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that the proposed model has a competitive performance compared to state-of-the-art methods on tasks related to geometrical and physical simulation data. Moreover, the proposed model can scale up to graphs with 1M vertices and conduct an inference faster than a conventional finite element analysis, which the existing equivariant models cannot achieve. ... To test the applicability of the proposed model, we composed the following two datasets: 1) a differential operator dataset of grid meshes; and 2) an anisotropic nonlinear heat equation dataset of meshes generated from CAD data. In this section, we discuss our machine learning model, the definition of the problem, and the results for each dataset. |
| Researcher Affiliation | Collaboration | Masanobu Horie (University of Tsukuba; Research Institute for Computational Science Co. Ltd.) horie@ricos.co.jp; Naoki Morita (University of Tsukuba; Research Institute for Computational Science Co. Ltd.) morita@ricos.co.jp; Toshiaki Hishinuma & Yu Ihara (Research Institute for Computational Science Co. Ltd.) {hishinuma,ihara}@ricos.co.jp; Naoto Mitsume (University of Tsukuba) mitsume@kz.tsukuba.ac.jp |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The corresponding implementation and the dataset are available online: https://github.com/yellowshippo/isogcn-iclr2021 |
| Open Datasets | Yes | The corresponding implementation and the dataset are available online: https://github.com/yellowshippo/isogcn-iclr2021 |
| Dataset Splits | Yes | We generated 100 samples for each train, validation, and test dataset. ... Finally, we obtained 439 FEA results for the training dataset, 143 FEA results for the validation dataset, and 140 FEA results for the test dataset. |
| Hardware Specification | Yes | Each computation was run on the same GPU (NVIDIA Tesla V100 with 32 GiB memory). ... each computation was run on the same CPU (Intel Xeon E5-2695 v2 @ 2.40GHz) using one core |
| Software Dependencies | Yes | We implemented these models using PyTorch 1.6.0 (Paszke et al., 2019) and PyTorch Geometric 1.6.1 (Fey & Lenssen, 2019). |
| Experiment Setup | Yes | For each experiment, we minimized the mean squared loss using the Adam optimizer (Kingma & Ba, 2014). ... We used the tanh activation function as a nonlinear activation function... We stacked m (= 2, 5) layers for GCN, GIN, GCNII, and Cluster-GCN. ... The FEA time step t was set to 0.01. |
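The experiment-setup row describes the baseline configuration: a stack of m (= 2, 5) graph-convolution layers with tanh activations, trained by minimizing a mean-squared loss. The following is a minimal NumPy sketch of that structure, not the authors' implementation (which uses PyTorch and PyTorch Geometric); the graph, feature sizes, adjacency normalization, and weight initialization are all hypothetical illustrations.

```python
import numpy as np

def gcn_layer(a_hat, x, w):
    """One graph convolution: propagate neighbor features, transform, apply tanh."""
    return np.tanh(a_hat @ x @ w)

def forward(a_hat, x, weights):
    """Stack m layers (here m = len(weights)), as in the quoted setup."""
    for w in weights:
        x = gcn_layer(a_hat, x, w)
    return x

def mse(pred, target):
    """Mean squared loss, the training objective named in the paper."""
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)
n, d = 4, 3                         # 4 vertices, 3 features (hypothetical sizes)
a = np.array([[0, 1, 0, 0],         # toy path-graph adjacency
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
a_tilde = a + np.eye(n)             # add self-loops
a_hat = a_tilde / a_tilde.sum(axis=1, keepdims=True)  # simple row normalization

weights = [rng.standard_normal((d, d)) * 0.1 for _ in range(2)]  # m = 2 layers
x = rng.standard_normal((n, d))
target = rng.standard_normal((n, d))

pred = forward(a_hat, x, weights)
loss = mse(pred, target)
```

In the actual experiments this loss would be minimized with the Adam optimizer; the sketch only shows a single forward pass and the loss evaluation.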