Learning Infinitesimal Generators of Continuous Symmetries from Data

Authors: Gyeonghoon Ko, Hyunsu Kim, Juho Lee

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We apply our method mainly in two domains: image data and partial differential equations, and demonstrate its advantages. Our codes are available at https://github.com/kogyeonghoon/learning-symmetry-from-scratch.git.
Researcher Affiliation Academia Gyeonghoon Ko, Hyunsu Kim, Juho Lee; Kim Jaechul Graduate School of AI, KAIST, Seoul, South Korea; {kog, kim.hyunsu, juholee}@kaist.ac.kr
Pseudocode No The paper describes its methods textually and mathematically but does not include any pseudocode or algorithm blocks.
Open Source Code Yes Our codes are available at https://github.com/kogyeonghoon/learning-symmetry-from-scratch.git.
Open Datasets Yes We use images of size 32×32 from the CIFAR-10 classification task.
Dataset Splits No The paper details training and test procedures but does not explicitly mention a dedicated validation dataset split or its specific use.
Hardware Specification Yes The learning process takes less than 10 hours on a GeForce RTX 2080 Ti GPU.
Software Dependencies No The paper mentions various software components and methods (e.g., Neural ODE, MLP, ResNet-18, SGD, Adam, WENO scheme) but does not provide specific version numbers for these, which is necessary for reproducible software dependencies.
Experiment Setup Yes We learn Equation 14 using stochastic gradient descent with w_sym = 1 and w_ortho = w_Lips = 10. The parameter σ, which controls the scale of transformation, is set to σ = 0.4, and the Lipschitz threshold τ is set to τ = 0.5. ... When training the ResNet-18 with CIFAR-10, both the feature extractor H_fext and the models after augmentation, we train the model for 200 epochs with a batch size of 128. The learning rate is set to 10⁻¹ and decreases by a factor of 0.2 at the 60th, 120th, and 160th epochs. The model is trained by the SGD optimizer with Nesterov momentum 0.9 and weight decay 0.4.
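As a minimal sketch of the quoted classifier schedule (base learning rate 10⁻¹, decayed by a factor of 0.2 at epochs 60, 120, and 160 over 200 epochs), the per-epoch learning rate can be computed as below. This is an illustrative helper, not code from the authors' repository; the function name and defaults are assumptions for the example.

```python
def lr_at_epoch(epoch, base_lr=0.1, milestones=(60, 120, 160), gamma=0.2):
    """Learning rate under a multi-step decay schedule.

    Starts at base_lr and multiplies by gamma at each milestone epoch,
    matching the schedule described in the experiment setup.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Example: the schedule over the 200-epoch run.
schedule = [lr_at_epoch(e) for e in range(200)]
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 120, 160], gamma=0.2)` wrapped around an SGD optimizer with `nesterov=True` and `momentum=0.9`.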