Clifford Group Equivariant Neural Networks
Authors: David Ruhe, Johannes Brandstetter, Patrick Forré
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate, notably from a single core implementation, state-of-the-art performance on several distinct tasks, including a three-dimensional n-body experiment, a four-dimensional Lorentz-equivariant high-energy physics experiment, and a five-dimensional convex hull experiment. |
| Researcher Affiliation | Collaboration | David Ruhe (AI4Science Lab, AMLab, API, University of Amsterdam; david.ruhe@gmail.com); Johannes Brandstetter (Microsoft Research AI4Science; brandstetter@ml.jku.at); Patrick Forré (AI4Science Lab, AMLab, University of Amsterdam; p.d.forre@uva.nl) |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Code is available at https://github.com/DavidRuhe/clifford-group-equivariant-neural-networks |
| Open Datasets | Yes | The n-body experiment [KFW+18] serves as a benchmark for assessing the performance of equivariant (graph) neural networks in simulating physical systems [HRXH22]. |
| Dataset Splits | Yes | The task is to estimate the function f(x₁, x₂) := sin(‖x₁‖) − ‖x₂‖³/2 + ⟨x₁, x₂⟩ / (‖x₁‖ ‖x₂‖), where the five-dimensional vectors x₁, x₂ are sampled from a standard Gaussian distribution in order to simulate train, test, and validation datasets. |
| Hardware Specification | Yes | The volumetric quantities and regression experiments were carried out on 1× 11 GB NVIDIA GeForce GTX 1080 Ti and 1× 11 GB NVIDIA GeForce GTX 2080 Ti instances. The n-body experiment ran on 1× 24 GB NVIDIA GeForce RTX 3090 and 1× 24 GB NVIDIA RTX A5000 nodes. Finally, the top tagging experiment was conducted on 4× 40 GB NVIDIA Tesla A100 Ampere instances. |
| Software Dependencies | Yes | This work made use of Python (PSF License Agreement), NumPy [VDWCV11] (BSD-3 Clause), PyTorch [PGM+19] (BSD-3 Clause), CUDA (proprietary license), Weights and Biases [Bie20] (MIT), Scikit-Learn [PVG+11] (BSD-3 Clause), Seaborn [Was21] (BSD-3 Clause), Matplotlib [Hun07] (PSF), and [VGO+20] (BSD-3). |
| Experiment Setup | No | The paper mentions keeping parameter budgets similar to baselines and providing experimental details in code, but does not include specific hyperparameter values or training configurations in the main text. "Parameter budgets as well as training setups are kept as similar as possible to the baseline references. All further experimental details can be found in the public code release." |
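The O(5)-invariant regression target in the Dataset Splits row can be sketched in a few lines of NumPy. This is a hedged reconstruction, not the authors' code: the function f(x₁, x₂) := sin(‖x₁‖) − ‖x₂‖³/2 + ⟨x₁, x₂⟩ / (‖x₁‖ ‖x₂‖) follows the form used in this benchmark, while the split sizes and random seed below are illustrative assumptions only (the paper defers such details to its code release).

```python
import numpy as np

def target_fn(x1, x2):
    """O(5)-invariant regression target (reconstructed):
    f(x1, x2) = sin(||x1||) - ||x2||^3 / 2 + <x1, x2> / (||x1|| ||x2||).
    """
    n1 = np.linalg.norm(x1, axis=-1)
    n2 = np.linalg.norm(x2, axis=-1)
    dot = np.sum(x1 * x2, axis=-1)
    return np.sin(n1) - n2**3 / 2 + dot / (n1 * n2)

# Sample train/validation/test splits from a standard Gaussian in 5D,
# as described in the paper; the sizes here are placeholders.
rng = np.random.default_rng(0)
splits = {}
for name, n in [("train", 1000), ("val", 200), ("test", 200)]:
    x1 = rng.standard_normal((n, 5))
    x2 = rng.standard_normal((n, 5))
    splits[name] = (x1, x2, target_fn(x1, x2))
```

Because f depends only on norms and the inner product, it is invariant under any orthogonal transformation applied jointly to x₁ and x₂, which is exactly the O(5) symmetry the equivariant models are meant to exploit.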