E(n) Equivariant Graph Neural Networks
Authors: Víctor Garcia Satorras, Emiel Hoogeboom, Max Welling
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method in modelling dynamical systems, representation learning in graph autoencoders and predicting molecular properties in the QM9 dataset. Our method reports the best or very competitive performance in all three experiments. |
| Researcher Affiliation | Academia | Victor Garcia Satorras¹, Emiel Hoogeboom¹, Max Welling¹. ¹UvA-Bosch Delta Lab, University of Amsterdam, Netherlands. Correspondence to: Victor Garcia Satorras <v.garciasatorras@uva.nl>, Emiel Hoogeboom <e.hoogeboom@uva.nl>, Max Welling <m.welling@uva.nl>. |
| Pseudocode | No | The paper describes its model through mathematical equations (Eqs. 4, 5, 6, 7) but does not include structured pseudocode or an algorithm block (a sketch of these equations follows the table). |
| Open Source Code | No | The paper does not contain an explicit statement about the release of its source code or a direct link to a code repository. |
| Open Datasets | Yes | The QM9 dataset (Ramakrishnan et al., 2014) has become a standard in machine learning as a chemical property prediction task. We imported the dataset partitions from (Anderson et al., 2019). |
| Dataset Splits | Yes | Dataset: We sampled 3,000 trajectories for training, 2,000 for validation and 2,000 for testing. Each trajectory has a duration of 1,000 timesteps. We sampled 5,000 graphs for training, 500 for validation and 500 for testing for both datasets. We imported the dataset partitions from (Anderson et al., 2019): 100K molecules for training, 18K for validation and 13K for testing. (All splits are collected into one mapping after the table.) |
| Hardware Specification | Yes | We also provide the average forward pass time in seconds for each of the models for a batch of 100 samples on a GTX 1080 Ti GPU. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and 'Swish activation function' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | All algorithms are composed of 4 layers and have been trained under the same conditions: batch size 100, 10,000 epochs, Adam optimizer; the learning rate was tuned independently for each model. We used 64 features for the hidden layers in the Radial Field, the GNN and our EGNN. As non-linearity we used the Swish activation function. (A configuration sketch follows the table.) |
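Since the model is specified only through equations, here is a minimal PyTorch sketch of one Equivariant Graph Convolutional Layer as described by Eqs. 4-7. The class name `EGCL`, the MLP shapes, and the dense all-pairs formulation are our assumptions, not the authors' released implementation; edge attributes a_ij are omitted for brevity.

```python
import torch
import torch.nn as nn

class EGCL(nn.Module):
    """One Equivariant Graph Convolutional Layer, sketching Eqs. 4-7.

    Dense all-pairs formulation for readability; edge attributes a_ij
    are omitted, and the MLP shapes are guesses, not the authors' code.
    """
    def __init__(self, hidden_dim=64):
        super().__init__()
        act = nn.SiLU()  # Swish, as reported in the experiment setup
        # Eq. 4: edge operation phi_e over (h_i, h_j, ||x_i - x_j||^2)
        self.phi_e = nn.Sequential(
            nn.Linear(2 * hidden_dim + 1, hidden_dim), act,
            nn.Linear(hidden_dim, hidden_dim), act)
        # Eq. 5: scalar weight phi_x for the coordinate update
        self.phi_x = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), act,
            nn.Linear(hidden_dim, 1))
        # Eq. 7: node operation phi_h over (h_i, m_i)
        self.phi_h = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), act,
            nn.Linear(hidden_dim, hidden_dim))

    def forward(self, h, x):
        # h: (n, hidden_dim) invariant node features; x: (n, 3) coordinates
        n = h.shape[0]
        diff = x[:, None, :] - x[None, :, :]        # x_i - x_j       (n, n, 3)
        dist2 = (diff ** 2).sum(-1, keepdim=True)   # ||x_i - x_j||^2 (n, n, 1)
        h_i = h[:, None, :].expand(n, n, -1)
        h_j = h[None, :, :].expand(n, n, -1)
        m = self.phi_e(torch.cat([h_i, h_j, dist2], dim=-1))      # Eq. 4: m_ij
        mask = 1.0 - torch.eye(n, device=h.device).unsqueeze(-1)  # drop j == i
        # Eq. 5: equivariant coordinate update with C = 1 / (n - 1)
        x = x + (diff * self.phi_x(m) * mask).sum(dim=1) / (n - 1)
        m_i = (m * mask).sum(dim=1)                  # Eq. 6: aggregate messages
        h = self.phi_h(torch.cat([h, m_i], dim=-1))  # Eq. 7: node update
        return h, x
```

By construction, rotating or translating `x` before the layer rotates or translates the output coordinates identically while leaving `h` unchanged, which is the E(n) equivariance property the paper proves.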
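The reported dataset splits, collected into one hypothetical Python mapping (the `SPLITS` name and the keys are ours; the sizes are the ones quoted in the table):

```python
# Reported split sizes per experiment, as quoted from the paper.
SPLITS = {
    "nbody_system":      {"train": 3_000,   "valid": 2_000,  "test": 2_000},   # trajectories of 1,000 timesteps
    "graph_autoencoder": {"train": 5_000,   "valid": 500,    "test": 500},     # graphs, same for both datasets
    "qm9":               {"train": 100_000, "valid": 18_000, "test": 13_000},  # molecules, partitions from Anderson et al. (2019)
}
```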
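And a hypothetical training skeleton reflecting the reported setup, reusing the `EGCL` sketch above: 4 layers, 64 hidden features, Swish non-linearity, batch size 100, 10,000 epochs, Adam optimizer. The learning rate below is an assumed placeholder, since the paper tunes it independently for each model.

```python
import torch

# 4 layers with 64 hidden features, as in the reported experiment setup.
layers = torch.nn.ModuleList([EGCL(hidden_dim=64) for _ in range(4)])
optimizer = torch.optim.Adam(layers.parameters(), lr=1e-3)  # lr: assumed placeholder, tuned per model in the paper

def forward(h, x):
    # Stack the 4 equivariant layers; h and x are updated jointly.
    for layer in layers:
        h, x = layer(h, x)
    return h, x
```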