Time-Reversal Symmetric ODE Network
Authors: In Huh, Eunho Yang, Sung Ju Hwang, Jinwoo Shin
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate TRS-ODENs on several classical dynamics, and find they can learn the desired time evolution from observed noisy and complex trajectories. We also show that, even for systems that do not possess the full time-reversal symmetry, TRS-ODENs can achieve better predictive performances over baselines. |
| Researcher Affiliation | Collaboration | In Huh¹˒², Eunho Yang³˒⁴, Sung Ju Hwang³˒⁴, Jinwoo Shin³˒⁵ — ¹Samsung Advanced Institute of Technology; ²CSE Team, DIT Center, Samsung Electronics; ³Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST); ⁴School of Computing, KAIST; ⁵School of Electrical Engineering, KAIST. in.huh@samsung.com, yangeh@gmail.com, {sjhwang82, jinwoos}@kaist.ac.kr |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/inhuh/trs-oden. |
| Open Datasets | Yes | We validate our proposed model in several domains including synthetic Duffing oscillators [22] (see Section 4.1), real-world coupled oscillators [37] (see Section 4.2), and reversible strange attractors [41] (see Section 4.3). |
| Dataset Splits | No | The paper describes training and test splits, for instance, 'We use the first 3/5 of the trajectory for training, and the remains for test.' and 'generate 50 trajectories each for training and test sets.', but it does not explicitly mention a distinct validation split. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, or cloud computing specifications, used for running its experiments. |
| Software Dependencies | No | The paper mentions an optimizer (Adam) and numerical integration methods (Runge-Kutta, leapfrog), but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or versions of other libraries). |
| Experiment Setup | Yes | Default model setting. We train models by using the Adam [19] with initial learning rate of 2 × 10⁻⁴ during 5,000 epochs. We use full-batch training because training sample sizes are quite small, except for Experiment VI. A single neural network fθ(q, p) is used for ODENs and TRS-ODENs, while HODENs consist of two neural networks Kθ₁(p) and Vθ₂(q), i.e., separable Hθ(q, p) = Kθ₁(p) + Vθ₂(q) [7]. Also, we use the leapfrog integrator for Solve [7], unless otherwise specified. The maximum allowed trajectory length at training phase is set to 10. |
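The separable-Hamiltonian setup quoted above (two networks Kθ₁(p) and Vθ₂(q), rolled out with a leapfrog integrator) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the `ScalarMLP` class, its sizes, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class ScalarMLP:
    """One-hidden-layer tanh network f: R -> R with an analytic df/dx.
    Stands in for the paper's Ktheta1(p) / Vtheta2(q); width is arbitrary."""
    def __init__(self, hidden=32):
        self.W1 = rng.normal(scale=0.5, size=(hidden, 1))
        self.b1 = np.zeros((hidden, 1))
        self.w2 = rng.normal(scale=0.5, size=(1, hidden))

    def __call__(self, x):
        return float(self.w2 @ np.tanh(self.W1 * x + self.b1))

    def grad(self, x):
        # d/dx [w2 . tanh(W1 x + b1)] = w2 @ ((1 - h^2) * W1)
        h = np.tanh(self.W1 * x + self.b1)
        return float(self.w2 @ ((1.0 - h**2) * self.W1))

def leapfrog(dK_dp, dV_dq, q, p, dt=0.1, steps=10):
    """Kick-drift-kick leapfrog; symplectic for separable H(q,p) = K(p) + V(q).
    Hamilton's equations: dq/dt = dK/dp, dp/dt = -dV/dq."""
    traj = [(q, p)]
    for _ in range(steps):
        p -= 0.5 * dt * dV_dq(q)   # half kick
        q += dt * dK_dp(p)         # drift
        p -= 0.5 * dt * dV_dq(q)   # half kick
        traj.append((q, p))
    return traj

# Roll out 10 steps (the paper's maximum training trajectory length)
# from an arbitrary initial state, using the two untrained networks.
K, V = ScalarMLP(), ScalarMLP()
trajectory = leapfrog(K.grad, V.grad, q=1.0, p=0.0)
```

With an analytic harmonic oscillator (K(p) = p²/2, V(q) = q²/2, so dK/dp = p and dV/dq = q), the same `leapfrog` routine conserves energy to O(dt²) over long rollouts, which is the usual motivation for choosing it over generic Runge-Kutta when learning Hamiltonian dynamics.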