Regularizing Towards Soft Equivariance Under Mixed Symmetries
Authors: Hyunsu Kim, Hyungi Lee, Hongseok Yang, Juho Lee
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using synthetic function approximation and motion forecasting tasks, we demonstrate that our method achieves better accuracy than prior approaches while discovering the approximate symmetry levels correctly. We experimentally evaluated our method with a synthetic function-approximation task and a motion forecasting task. |
| Researcher Affiliation | Collaboration | (1) Kim Jaechul Graduate School of AI, KAIST, Daejeon, South Korea; (2) School of Computing, KAIST, Daejeon, South Korea; (3) Discrete Mathematics Group, Institute for Basic Science (IBS), Daejeon, South Korea; (4) AITRICS, Seoul, South Korea. |
| Pseudocode | No | No section explicitly labeled 'Pseudocode' or 'Algorithm' was found within the paper. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing open-source code or provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | We collect the trajectories from Waymo Open Motion Dataset (WOMD) (Ettinger et al., 2021) containing trajectories of vehicles moving on roads. We use 16,814 trajectories for training, 3,339 trajectories for validation, and 3,563 trajectories for testing. |
| Dataset Splits | Yes | WOMD: "We use 16,814 trajectories for training, 3,339 trajectories for validation, and 3,563 trajectories for testing." Synthetic tasks (Inertia, Cos Sim): 1,000 training, 1,000 validation, and 1,000 testing samples. |
| Hardware Specification | Yes | All experiments were trained and evaluated on RTX 3090 devices. |
| Software Dependencies | No | The paper mentions 'ADAM' as an optimizer but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | Additional information regarding the experiments, such as the specific hyperparameters employed and the data preprocessing details applied, can be found in Appendix F. Furthermore, Appendix C provides insightful recommendations for efficient initializations of neural networks in the PER settings. Table 5: Common hyperparameter settings for each task. Table 6: Hyperparameter setting of our model. |
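
The paper's core idea is to regularize a network towards approximate ("soft") equivariance rather than hard-wiring exact symmetry. The sketch below is a generic illustration of that idea, not the authors' actual projection-based regularizer: it penalizes the sampled equivariance error ||f(g·x) − g·f(x)||² under random 2D rotations, with a hypothetical weight `lam` controlling how strongly the symmetry is enforced. The network `f`, the rotation group, and all parameter values are illustrative assumptions.

```python
# Minimal sketch of soft-equivariance regularization (assumed, generic form;
# NOT the paper's exact PER objective). A model f is softly encouraged to
# commute with sampled group actions g: f(g . x) ~ g . f(x).
import torch

def rotation(theta: torch.Tensor) -> torch.Tensor:
    """2D rotation matrix for angle theta (a scalar tensor)."""
    c, s = torch.cos(theta), torch.sin(theta)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

def soft_equivariance_penalty(f, x: torch.Tensor, n_samples: int = 4) -> torch.Tensor:
    """Mean squared equivariance error of f on a batch x of shape [B, 2],
    averaged over n_samples randomly sampled rotations."""
    penalty = x.new_zeros(())
    for _ in range(n_samples):
        theta = torch.rand(()) * 2 * torch.pi
        g = rotation(theta)        # sampled group element
        f_gx = f(x @ g.T)          # f(g . x): act on the input, then apply f
        g_fx = f(x) @ g.T          # g . f(x): apply f, then act on the output
        penalty = penalty + ((f_gx - g_fx) ** 2).mean()
    return penalty / n_samples

# Hypothetical usage: total loss = task loss + lam * equivariance penalty.
# A small lam permits symmetry breaking (soft equivariance); a large lam
# pushes f towards exact equivariance.
f = torch.nn.Linear(2, 2)
x, y = torch.randn(32, 2), torch.randn(32, 2)
lam = 0.1  # assumed regularization weight
loss = torch.nn.functional.mse_loss(f(x), y) + lam * soft_equivariance_penalty(f, x)
```

In this generic formulation the regularization weight plays the role of a symmetry-level knob; the paper's method instead learns per-symmetry levels and handles mixed symmetries, which this sketch does not attempt to reproduce.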