Symmetry Discovery Beyond Affine Transformations

Authors: Ben Shaw, Abram Magner, Kevin Moon

NeurIPS 2024

Reproducibility variables, each listed with its result and the supporting LLM response:
Research Type: Experimental
"We experimentally compare our method to an existing method known as LieGAN and show that our method is competitive at detecting affine symmetries for large sample sizes and superior to LieGAN for small sample sizes. We also show our method is able to detect continuous symmetries beyond the affine group and is generally more computationally efficient than LieGAN."
Researcher Affiliation: Academia
"Ben Shaw, Utah State University, Logan, UT 84322, ben.shaw@usu.edu; Abram Magner, University at Albany, Albany, NY 12222, amagner@albany.edu; Kevin R. Moon, Utah State University, Logan, UT 84322, kevin.moon@usu.edu"
Pseudocode: Yes
"Algorithm 1: Constructing the Extended Feature Matrix M; Algorithm 2: Constructing the Invariant Function Extended Feature Matrix M2"
Open Source Code: Yes
"The code used for the main experiments is available at https://github.com/KevinMoonLab/SymmetryML. Additionally, all of the data we have generated for the main experiments is made available."
Open Datasets: Yes
"Our next experiment uses real data that is publicly available, a dataset we refer to as the Bear Lake Weather dataset [17]. In this experiment, we analyze the symmetries of a model-induced probability distribution arising from the Palmer Penguins dataset [24]."
Dataset Splits: Yes
"Using scikit-learn [25], we train a random forest regressor on this data using a random train/test split with the test size proportion being 0.2." (A minimal sketch of this protocol appears after the table.)
Hardware Specification: No
The paper only provides computation times for experiments (e.g., table columns labeled "LieGAN time (s)" and "Our time (s)") but does not specify the hardware used (e.g., CPU/GPU model, memory).
Software Dependencies: No
The paper mentions McTorch [16], Python's scipy package, and scikit-learn [25], but does not provide version numbers for these software dependencies.
Experiment Setup: Yes
"Then, with a model using affine coefficients for the estimated vector field, we estimate W in Equation (13) using the L1 loss function and the Riemannian Adagrad optimization algorithm [16] with learning rate 0.01 for 5000 epochs." "Optimized using Riemannian stochastic gradient descent with a learning rate of 0.001 and trained for 5000 epochs." "Using the L1 loss function and the Riemannian Adagrad optimization algorithm [16] with learning rate 0.01..." (A sketch of this optimization loop follows the train/test sketch below.)
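
For reference, here is a minimal sketch of the train/test protocol quoted in the Dataset Splits row, using scikit-learn's train_test_split and RandomForestRegressor. The feature matrix X and target y below are synthetic placeholders, not the Bear Lake Weather data; only the model class and the 0.2 test proportion come from the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Placeholder data standing in for the Bear Lake Weather dataset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    y = X @ rng.normal(size=4) + 0.1 * rng.normal(size=500)

    # Random train/test split with test size proportion 0.2, as in the paper.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    model = RandomForestRegressor()
    model.fit(X_train, y_train)
    print("test R^2:", model.score(X_test, y_test))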
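
And a sketch of the optimization loop from the Experiment Setup row. The paper optimizes W in its Equation (13) with Riemannian Adagrad from McTorch [16]; since that equation and its manifold constraint are not reproduced here, this stand-in fits an unconstrained W to a hypothetical linear system M @ W = b using plain (Euclidean) torch.optim.Adagrad. The L1 loss, the 0.01 learning rate, and the 5000 epochs match the quoted setup; everything else is an assumption.

    import torch

    torch.manual_seed(0)
    M = torch.randn(200, 6)    # placeholder extended feature matrix
    b = torch.randn(200, 3)    # placeholder targets
    W = torch.zeros(6, 3, requires_grad=True)  # unconstrained stand-in for W

    # Euclidean Adagrad in place of McTorch's Riemannian Adagrad; the L1
    # loss, learning rate 0.01, and 5000 epochs match the quoted setup.
    optimizer = torch.optim.Adagrad([W], lr=0.01)
    loss_fn = torch.nn.L1Loss()

    for epoch in range(5000):
        optimizer.zero_grad()
        loss = loss_fn(M @ W, b)
        loss.backward()
        optimizer.step()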