Scalars are universal: Equivariant machine learning, structured like classical physics
Authors: Soledad Villar, David W. Hogg, Kate Storey-Fisher, Weichi Yao, Ben Blum-Smith
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We complement our theory with numerical examples that show that the scalar-based method is simple, efficient, and scalable. ... We present numerical experiments using our scalar-based approach compared to other methods in Section 7 (see also [96]). |
| Researcher Affiliation | Academia | Soledad Villar, Department of Applied Mathematics and Statistics, Johns Hopkins University; David W. Hogg, Flatiron Institute, a division of the Simons Foundation; Kate Storey-Fisher, Center for Cosmology and Particle Physics, Department of Physics, New York University; Weichi Yao, Department of Technology, Operations, and Statistics, Stern School of Business, New York University; Ben Blum-Smith, Center for Data Science, New York University |
| Pseudocode | No | The paper does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available on GitHub, and it reuses much of the functionality provided by EMLP [28]. |
| Open Datasets | Yes | We demonstrate our approach using scalar-based multi-layer perceptrons (MLP) on two toy learning tasks from [28]: an O(5)-invariant task and an O(3)-equivariant task. (A minimal sketch of this scalar-based construction appears after the table.) |
| Dataset Splits | No | The paper mentions 'Test error as a function of training set size' but does not specify training/validation/test splits as percentages or absolute counts, nor does it explicitly mention a validation set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions that the code reuses functionality from EMLP [28] and uses MLPs, but it does not specify software names with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | No | The paper describes the tasks and models used (MLPs), but it does not provide specific experimental setup details such as learning rates, batch sizes, number of epochs, or optimizer settings. |
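For readers parsing the quoted responses, "scalar-based" refers to the paper's central construction: invariant functions are built by feeding all pairwise inner products of the input vectors to an ordinary MLP, and equivariant vector outputs are linear combinations of the input vectors whose coefficients depend only on those scalars. The sketch below illustrates that construction in plain NumPy; the function names and the tiny network are illustrative assumptions, not the authors' released code (which builds on EMLP [28]).

```python
import numpy as np

def pairwise_scalars(vectors):
    """All pairwise inner products <v_i, v_j> of the input vectors.

    These scalars are invariant under any orthogonal transformation
    applied simultaneously to every input vector.
    """
    n = len(vectors)
    return np.array([vectors[i] @ vectors[j]
                     for i in range(n) for j in range(i, n)])

def tiny_mlp(x, w1, b1, w2, b2):
    """A small fully connected network acting on the invariant scalars."""
    return np.tanh(x @ w1 + b1) @ w2 + b2

def equivariant_vector(vectors, scalar_weights_fn):
    """O(d)-equivariant vector output: a linear combination of the input
    vectors with coefficients that depend only on the invariant scalars."""
    coeffs = scalar_weights_fn(pairwise_scalars(vectors))
    return coeffs @ vectors

# --- Invariance check for two vectors in R^3 ---
rng = np.random.default_rng(0)
v = rng.normal(size=(2, 3))      # two input vectors
s = pairwise_scalars(v)          # 3 invariant scalars: v1.v1, v1.v2, v2.v2
w1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
assert np.allclose(pairwise_scalars(v @ q.T), s)       # scalars unchanged
assert np.allclose(tiny_mlp(pairwise_scalars(v @ q.T), w1, b1, w2, b2),
                   tiny_mlp(s, w1, b1, w2, b2))        # invariant output

# --- Equivariance check ---
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)   # one coefficient per vector
weights_fn = lambda scalars: tiny_mlp(scalars, w1, b1, w3, b3)
out = equivariant_vector(v, weights_fn)
assert np.allclose(equivariant_vector(v @ q.T, weights_fn), out @ q.T)
```

The two checks at the end confirm the defining properties: the scalar features (and hence any invariant output built on them) are unchanged by a random orthogonal transformation, and the equivariant vector output rotates along with the inputs.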