Learning invariant representations of time-homogeneous stochastic dynamical systems
Authors: Vladimir R. Kostic, Pietro Novelli, Riccardo Grazzi, Karim Lounici, Massimiliano Pontil
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare our method against state-of-the-art approaches on different datasets, showing better performance across the board. ...Numerical experiments illustrate the versatility and competitive performance of our approach against several baselines. |
| Researcher Affiliation | Collaboration | Vladimir R. Kostic: Istituto Italiano di Tecnologia, University of Novi Sad; Pietro Novelli: Istituto Italiano di Tecnologia; Riccardo Grazzi: Istituto Italiano di Tecnologia, University College London; Karim Lounici: CMAP, École Polytechnique; Massimiliano Pontil: Istituto Italiano di Tecnologia, University College London |
| Pseudocode | Yes | Algorithm 1 DPNets Training |
| Open Source Code | Yes | The code to reproduce the examples can be found at https://pietronvll.github.io/DPNets/, and it heavily depends on Kooplearn https://kooplearn.readthedocs.io/. |
| Open Datasets | Yes | Ordered MNIST Following Kostic et al. (2022), we create a stochastic dynamical system by randomly sampling images from the MNIST dataset... Fluid dynamics We study the classical problem of the transport of a passive scalar field by a 2D fluid flow past a cylinder (Raissi et al., 2020). ...Metastable states of Chignolin We study the dynamics of Chignolin... (Lindorff-Larsen et al., 2011). (A hedged Ordered MNIST sketch follows the table.) |
| Dataset Splits | Yes | To have a fair comparison, every neural network model in these experiments has been trained on the same data splits, batch sizes, number of epochs, architectures and seeds. The learning rate, however, has been optimized for each one separately. We defer every technical detail, as well as additional results to App. F. ...For training all the models (DPNets, VAMPNets, and DAE) we use Adam (Kingma and Ba, 2014) as optimizer with a learning rate tuned with a random search over 100 samples in [10⁻⁵, 10⁻³], and a batch size of 2000. We train for 10000 epochs. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper states that the code depends on Kooplearn but does not provide specific version numbers for software dependencies like Python, PyTorch, CUDA, or other libraries. |
| Experiment Setup | Yes | To have a fair comparison, every neural network model in these experiments has been trained on the same data splits, batch sizes, number of epochs, architectures and seeds. The learning rate, however, has been optimized for each one separately. We defer every technical detail, as well as additional results to App. F. ...For training all the models (DPNets, VAMPNets, and DAE) we use Adam (Kingma and Ba, 2014) as optimizer with a learning rate tuned with a random search over 100 samples in [10⁻⁵, 10⁻³], and a batch size of 2000. We train for 10000 epochs. (A hedged training-configuration sketch follows the table.) |
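The Ordered MNIST construction quoted in the Open Datasets row lends itself to a short illustration. The sketch below is a hedged reading, not the paper's code: it assumes the dynamics cycle deterministically through the digit classes while the emitted image at each step is drawn uniformly at random from that class; the trajectory length, class count, and function name `ordered_mnist_trajectory` are illustrative placeholders.

```python
# Hedged sketch of an "Ordered MNIST" stochastic dynamical system: the digit
# label follows a deterministic cycle 0 -> 1 -> ... -> 9 -> 0, and the observed
# image at each step is a random MNIST sample of that digit.
import numpy as np
from torchvision import datasets

def ordered_mnist_trajectory(length=1000, num_classes=10, root="./data", seed=0):
    rng = np.random.default_rng(seed)
    mnist = datasets.MNIST(root=root, train=True, download=True)
    images = mnist.data.numpy()      # (60000, 28, 28) uint8 images
    labels = mnist.targets.numpy()
    # Pre-compute the indices of every image belonging to each digit class.
    by_class = [np.where(labels == c)[0] for c in range(num_classes)]
    traj = np.empty((length, 28, 28), dtype=np.uint8)
    for t in range(length):
        digit = t % num_classes            # deterministic cyclic label dynamics
        idx = rng.choice(by_class[digit])  # stochastic choice of image for that digit
        traj[t] = images[idx]
    return traj

# Consecutive pairs (traj[t], traj[t+1]) then serve as (x_t, x_{t+1}) training data.
```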
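The training protocol quoted in the Dataset Splits and Experiment Setup rows (Adam, batch size 2000, 10000 epochs, a 100-sample random search over the learning rate in [10⁻⁵, 10⁻³]) can likewise be sketched. This is not the released DPNets/Kooplearn code: `build_model`, `model.training_loss`, and `val_score` are hypothetical stand-ins, and the log-uniform sampling of the learning rate is an assumption, since the paper only states a random search over the interval.

```python
# Hedged sketch of the reported training protocol: Adam optimizer, 10000 epochs,
# and a 100-sample random search over the learning rate in [1e-5, 1e-3].
import numpy as np
import torch

def train_one(model, loader, lr, epochs=10_000):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:                    # batches of (x_t, x_{t+1}) pairs
            opt.zero_grad()
            loss = model.training_loss(x, y)   # placeholder for the model's objective
            loss.backward()
            opt.step()
    return model

def random_search(build_model, loader, val_score, n_trials=100, seed=0):
    rng = np.random.default_rng(seed)
    # Assumption: sample learning rates log-uniformly in [1e-5, 1e-3].
    lrs = 10.0 ** rng.uniform(-5, -3, size=n_trials)
    best_model, best_score = None, -np.inf
    for lr in lrs:
        model = train_one(build_model(), loader, lr)
        score = val_score(model)               # placeholder validation metric
        if score > best_score:
            best_model, best_score = model, score
    return best_model, best_score
```

In this reading, each learning-rate trial trains a fresh model from scratch and the best trial is selected on a validation score, which matches the paper's statement that the learning rate was optimized separately per model while all other settings were shared.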