Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning

Authors: Robin Winter, Frank Noé, Djork-Arné Clevert

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the effectiveness of our proposed model for graph reconstruction, generation and interpolation and evaluate the expressive power of extracted representations for downstream graph-level classification and regression.
Researcher Affiliation | Collaboration | Robin Winter (Bayer AG and Freie Universität Berlin, robin.winter@bayer.com); Frank Noé (Freie Universität Berlin, frank.noe@fu-berlin.de); Djork-Arné Clevert (Bayer AG, djork-arne.clevert@bayer.com)
Pseudocode | No | The paper does not contain an explicitly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Code available at https://github.com/jrwnter/pigvae
Open Datasets | Yes | We perform experiments on synthetically generated graphs and molecular graphs from the public datasets QM9 and PubChem. ... Erdos-Renyi graphs [47], ... Barabasi-Albert graphs [48], ... and Ego graphs. ... QM9 dataset [54, 55]. This dataset contains about 134 thousand organic molecules... We extracted organic molecules with up to 32 heavy atoms, resulting in a set of approximately 67 million compounds (more details in Appendix F) from the public PubChem database [58]. (A sketch of generating these synthetic graph families appears after this table.)
Dataset Splits | Yes | For all datasets we split the dataset into 80% train and 20% test data. For the QM9 dataset, we further split the train data into 90% train and 10% validation data. (From Appendix C; a split sketch appears after this table.)
Hardware Specification | Yes | Trainings were performed on a single GPU (NVIDIA A100-SXM4-40GB). (From Appendix C)
Software Dependencies | No | The paper mentions using the 'Adam optimizer' and provides its hyperparameters, but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | For all experiments, we used a batch size of 32, a learning rate of 10⁻⁴ and Adam optimizer (β1 = 0.9, β2 = 0.999). We train for 200 epochs on synthetic data and for 100 epochs on QM9 dataset. (From Appendix C; a training-setup sketch appears after this table.)
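
The Open Datasets row names three synthetic graph families. As a rough, hedged illustration of how such graphs can be generated, the sketch below uses networkx; the node-count range, edge probability p, and attachment parameter m are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch: sampling the three synthetic graph families mentioned in the
# paper (Erdos-Renyi, Barabasi-Albert, Ego graphs) with networkx.
# Size ranges and parameters are illustrative assumptions only.
import random
import networkx as nx

def sample_synthetic_graph(kind: str, rng: random.Random) -> nx.Graph:
    n = rng.randint(12, 20)  # assumed node-count range, not from the paper
    if kind == "erdos_renyi":
        return nx.erdos_renyi_graph(n, p=0.3, seed=rng.randint(0, 2**31))
    if kind == "barabasi_albert":
        return nx.barabasi_albert_graph(n, m=2, seed=rng.randint(0, 2**31))
    if kind == "ego":
        # Ego graph: 1-hop neighborhood of a random node in a larger random graph.
        base = nx.erdos_renyi_graph(2 * n, p=0.2, seed=rng.randint(0, 2**31))
        center = rng.choice(list(base.nodes))
        return nx.ego_graph(base, center, radius=1)
    raise ValueError(f"unknown graph family: {kind}")

rng = random.Random(0)
graphs = [sample_synthetic_graph(k, rng)
          for k in ("erdos_renyi", "barabasi_albert", "ego")]
print([g.number_of_nodes() for g in graphs])
```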
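
The Dataset Splits row reports an 80/20 train/test split, with a further 90/10 train/validation split of the training portion for QM9. A minimal sketch of that protocol, assuming scikit-learn and an arbitrary random seed (neither is stated in the paper):

```python
# Hedged sketch of the reported split protocol: 80% train / 20% test,
# then 90/10 train/validation on the training portion (used for QM9).
from sklearn.model_selection import train_test_split

def split_dataset(items, make_val_split=False, seed=42):
    train, test = train_test_split(items, test_size=0.2, random_state=seed)
    if not make_val_split:
        return train, None, test
    train, val = train_test_split(train, test_size=0.1, random_state=seed)
    return train, val, test

# Dummy indices standing in for graphs / molecules.
train, val, test = split_dataset(list(range(1000)), make_val_split=True)
print(len(train), len(val), len(test))  # 720 80 200
```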
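
The Experiment Setup row lists batch size 32, learning rate 10⁻⁴, and Adam with β1 = 0.9, β2 = 0.999. The sketch below wires those values into a generic PyTorch training loop; the model, data, and loss are placeholders, not the authors' permutation-invariant graph VAE.

```python
# Hedged sketch: the reported optimization settings (batch size 32, lr 1e-4,
# Adam with betas (0.9, 0.999)) in a generic PyTorch loop. Model, data, and
# loss are placeholders, not the architecture from the paper.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(256, 8))   # dummy stand-in data
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = torch.nn.Linear(8, 8)                  # stand-in for the graph VAE
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

num_epochs = 200  # 200 epochs on synthetic data, 100 on QM9, per Appendix C
for epoch in range(num_epochs):
    for (x,) in loader:
        optimizer.zero_grad()
        loss = F.mse_loss(model(x), x)         # placeholder reconstruction loss
        loss.backward()
        optimizer.step()
```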