Approximately Equivariant Networks for Imperfectly Symmetric Dynamics
Authors: Rui Wang, Robin Walters, Rose Yu
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experiments), Baselines: "We compare with several state-of-the-art (SoTA) methods, from those without symmetry bias to perfect symmetry, and SoTA approximately symmetric models." ... "Table 1: Prediction RMSE on three synthetic smoke plume datasets with approximate symmetries." |
| Researcher Affiliation | Academia | University of California San Diego; Northeastern University. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "We open-source our code https://github.com/Rose-STL-Lab/Approximately-Equivariant-Nets" |
| Open Datasets | Yes | The synthetic 64 × 64 2-D smoke datasets are generated by Phi Flow (Holl et al., 2020) and contain smoke simulations with different initial conditions and external forces. ... We use real experimental data on 2D turbulent velocity in NASA multi-stream jets that are measured using time-resolved particle image velocimetry (Bridges & Wernet, 2017). ... We use the reanalysis ocean current velocity data generated by the NEMO ocean engine (Madec, 2008). |
| Dataset Splits | No | The paper states 'For test-domain, we train and test on different simulations/regions with an 80%-20% split' and discusses training and testing, but it does not explicitly provide details about a validation dataset split (e.g., percentages or counts for a distinct validation set). |
| Hardware Specification | No | The paper mentions 'This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility', but it does not specify concrete hardware details like exact GPU/CPU models or processor types. |
| Software Dependencies | No | The paper mentions external tools used for data generation (e.g., 'Phi Flow (Holl et al., 2020)', 'NEMO ocean engine (Madec, 2008)'), but it does not provide specific software dependencies or library versions (e.g., PyTorch version, Python version) for the model implementation itself. |
| Experiment Setup | Yes | We perform a grid hyperparameter search as shown in Table 3, including learning rate, batch size, hidden dimension, number of layers, and the number of prediction steps used for computing training errors. We also tune the number of filter banks for group convolution-based models and the coefficient of weight constraints for relaxed weight-sharing models. The input length is fixed as 10. |
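The grid search described in the experiment-setup row can be sketched as follows. The hyperparameter names match those listed in the paper's Table 3, but the value ranges and the `grid_configs` helper are illustrative assumptions, not the paper's actual search grid:

```python
from itertools import product

# Hypothetical search space over the hyperparameters named in Table 3;
# the candidate values here are placeholders for illustration only.
search_space = {
    "learning_rate": [1e-3, 1e-4],
    "batch_size": [16, 32],
    "hidden_dim": [64, 128],
    "num_layers": [3, 5],
}

def grid_configs(space):
    """Yield every combination of hyperparameter values as a dict."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

# Each config would be used to train one model; the best is selected
# by validation error. With 2 values per axis this grid has 2^4 = 16 runs.
configs = list(grid_configs(search_space))
```

A full reproduction would additionally sweep the filter-bank count (group convolution models) and the weight-constraint coefficient (relaxed weight-sharing models), as the paper notes.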