PDE-Driven Spatiotemporal Disentanglement
Authors: Jérémie Donà, Jean-Yves Franceschi, Sylvain Lamprier, Patrick Gallinari
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally demonstrate the performance and broad applicability of our method against prior state-of-the-art models on physical and synthetic video datasets. We study the experimental results of our model on various spatiotemporal phenomena with physical, synthetic video and real-world datasets, presented briefly in that section and in more detail in Appendix D. We demonstrate the relevance of our model with ablation studies and its performance by comparing it with more complex state-of-the-art models. |
| Researcher Affiliation | Collaboration | Sorbonne Université, CNRS, LIP6, F-75005 Paris, France; Criteo AI Lab, Paris, France |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is also publicly released at the following URL: https://github.com/JeremDona/spatiotemporal_variable_separation. |
| Open Datasets | Yes | Moving MNIST dataset (Srivastava et al., 2015), 3D Warehouse Chairs dataset introduced by Aubry et al. (2014), TaxiBJ dataset (Zhang et al., 2017), KTH (Schüldt et al., 2004), and SST, derived from the data assimilation engine NEMO (Madec & NEMO System Team). Footnote 2 for NEMO links to 'resources.marine.copernicus.eu'. |
| Dataset Splits | Yes | The dataset is then split into training (240 sequences) and testing (60 sequences) sets. Training sequences correspond to randomly selected chunks of length ν = 10 in the first 2987 acquisitions (corresponding to 80% of total acquisitions), and testing sequences to all possible chunks of length ν = 10 in the remaining 747 acquisitions. (See the split sketch after the table.) |
| Hardware Specification | No | Each model was trained on an Nvidia GPU with CUDA 10.1. (The paper names the GPU brand but not a specific model or other hardware details.) |
| Software Dependencies | Yes | We used Python 3.8.1 and PyTorch 1.4.0 (Paszke et al., 2019) to implement our model. |
| Experiment Setup | Yes | Optimization is performed using the Adam optimizer (Kingma & Ba, 2015) with initial learning rate 4×10⁻⁴ for WaveEq, WaveEq-100, Moving MNIST, 3D Warehouse Chairs and SST, and 4×10⁻⁵ for TaxiBJ, and with decay rates β1 = 0.9 (except for the experiments on Moving MNIST, where β1 = 0.5) and β2 = 0.99. The batch size is 128 for WaveEq, WaveEq-100, Moving MNIST and 3D Warehouse Chairs, and 100 for SST and TaxiBJ. The paper also provides detailed values for the lambda loss coefficients. (See the optimizer sketch after the table.) |
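
For concreteness, the chunk-based split quoted in the Dataset Splits row can be reconstructed roughly as follows. This is a minimal sketch assuming the acquisitions are held in a time-ordered NumPy array; `make_splits` and its arguments are illustrative names, not the authors' code, which lives in the repository linked above.

```python
import numpy as np

def make_splits(acquisitions, nu=10, train_ratio=0.8,
                n_train_sequences=240, seed=0):
    """Chunk a time-ordered series into train/test sequences of length nu.

    Training sequences are randomly selected chunks from the first 80% of
    acquisitions; test sequences are all possible chunks in the remainder.
    """
    n_total = len(acquisitions)               # e.g. 3734 acquisitions
    n_train_acq = int(train_ratio * n_total)  # first 2987 acquisitions

    rng = np.random.default_rng(seed)
    # Random chunk start indices, kept fully inside the training portion.
    starts = rng.integers(0, n_train_acq - nu + 1, size=n_train_sequences)
    train = [acquisitions[s:s + nu] for s in starts]

    # Every possible chunk of length nu in the held-out acquisitions.
    test = [acquisitions[s:s + nu]
            for s in range(n_train_acq, n_total - nu + 1)]
    return train, test
```

With 3734 total acquisitions, `int(0.8 * 3734)` gives the 2987 training acquisitions quoted above and leaves 747 for testing.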
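Similarly, the per-dataset optimizer settings in the Experiment Setup row translate into a small PyTorch configuration along these lines. Only the hyperparameter values come from the paper; the dictionaries and helper functions below are assumptions for illustration.

```python
import torch

# Hyperparameter values quoted in the paper, keyed by dataset.
LR = {"WaveEq": 4e-4, "WaveEq-100": 4e-4, "MovingMNIST": 4e-4,
      "Chairs": 4e-4, "SST": 4e-4, "TaxiBJ": 4e-5}
BETA1 = {"MovingMNIST": 0.5}          # all other datasets use beta1 = 0.9
BATCH = {"SST": 100, "TaxiBJ": 100}   # all other datasets use batch size 128

def make_optimizer(model, dataset):
    """Build the Adam optimizer with the reported per-dataset settings."""
    return torch.optim.Adam(model.parameters(),
                            lr=LR[dataset],
                            betas=(BETA1.get(dataset, 0.9), 0.99))

def batch_size(dataset):
    return BATCH.get(dataset, 128)
```

For example, `make_optimizer(model, "TaxiBJ")` would yield Adam with learning rate 4×10⁻⁵ and decay rates (0.9, 0.99), matching the reported setup.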