LieGG: Studying Learned Lie Group Generators
Authors: Artem Moskalev, Anna Sepliarskaia, Ivan Sosnovik, Arnold Smeulders
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | From Section 5 (Experiments): Firstly, we conduct the experiment on the synthetic problem to test the ability of our method to accurately retrieve a symmetry learned by a network. To do so, we adopt the invariant regression task from [14] with clean O(5) symmetry built-in. |
| Researcher Affiliation | Collaboration | Artem Moskalev (UvA-Bosch Delta Lab, University of Amsterdam, a.moskalev@uva.nl); Anna Sepliarskaia (Machine Learning Research Unit, TU Wien, seplanna@gmail.com); Ivan Sosnovik (UvA-Bosch Delta Lab, University of Amsterdam, i.sosnovik@uva.nl); Arnold Smeulders (UvA-Bosch Delta Lab, University of Amsterdam, a.w.m.smeulders@uva.nl) |
| Pseudocode | No | The paper describes steps and calculations in text, but it does not contain structured pseudocode or algorithm blocks with formal labels. |
| Open Source Code | Yes | Source code: https://github.com/amoskalev/liegg |
| Open Datasets | Yes | In this experiment, we test the ability of our method to retrieve learned symmetries on the rotation MNIST dataset [23]. |
| Dataset Splits | Yes | Once the network's performance plateaus on the validation split, we terminate the training, and apply our method to retrieve learned symmetries. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, library versions like PyTorch or TensorFlow). |
| Experiment Setup | No | The paper discusses varying network architectures (parameter counts ranging from 40,000 to 200,000 and depth from 1 to 5 hidden layers) and training until convergence on the validation split. However, it does not explicitly state specific hyperparameter values such as learning rate, batch size, or optimizer settings needed for reproducibility. |
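The synthetic experiment above relies on a regression target with a clean O(5) symmetry built in. As a hedged illustration of what such a task looks like (not the paper's exact target function, which is taken from reference [14]), the sketch below builds an O(5)-invariant label from norms and inner products, which are by construction unchanged under any orthogonal transformation applied jointly to both inputs:

```python
import numpy as np

def random_orthogonal(n, rng):
    # QR decomposition of a Gaussian matrix gives an element of O(n);
    # rescaling columns by the signs of diag(R) makes the sample Haar-uniform.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

def invariant_target(x1, x2):
    # Any function of norms and inner products is O(5)-invariant.
    # This particular combination is illustrative, not the paper's target.
    return np.sin(np.linalg.norm(x1)) + np.dot(x1, x2) / (
        np.linalg.norm(x1) * np.linalg.norm(x2))

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(5), rng.standard_normal(5)
R = random_orthogonal(5, rng)

# The label is unchanged when both inputs are rotated/reflected together,
# so a network trained on such data can in principle learn the O(5) symmetry.
assert np.isclose(invariant_target(x1, x2), invariant_target(R @ x1, R @ x2))
```

A network regressed onto labels of this form is the setting in which a symmetry-retrieval method can be tested: the ground-truth symmetry group is known exactly, so retrieved generators can be compared against it.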