Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Recognition Models to Learn Dynamics from Partial Observations with Neural ODEs
Authors: Mona Buisson-Fenet, Valery Morgenthaler, Sebastian Trimpe, Florent Di Meglio
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the performance of the proposed approach on numerical simulations and on an experimental dataset from a robotic exoskeleton. |
| Researcher Affiliation | Collaboration | Mona Buisson-Fenet (Ansys Research Team, Ansys France; Centre Automatique et Systèmes, Mines Paris PSL University); Valery Morgenthaler (Ansys Research Team, Ansys France); Sebastian Trimpe (Institute for Data Science in Mechanical Engineering, RWTH Aachen University); Florent Di Meglio (Centre Automatique et Systèmes, Mines Paris PSL University) |
| Pseudocode | No | The paper describes methods and equations but does not present them in a structured pseudocode or algorithm block format. |
| Open Source Code | Yes | Implementation details are provided in the supplementary material, code to reproduce the experiments is available at https://anonymous.4open.science/r/structured_NODEs-7C23. |
| Open Datasets | Yes | We use a set of measurements collected from a robotic exoskeleton at Wandercraft, presented in (Vigne, 2021) and Fig. 8. |
| Dataset Splits | Yes | We directly train the NODE on a random subset of training trajectories, use a subset of validation trajectories for early stopping, and a subset of test trajectories to evaluate the performance of the learned model. ... We learn from N = 265 trajectories of a subset of input frequencies: {2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5, 11, 13, 15} Hz. We then evaluate on 163 test trajectories of 0.2 s from these input frequencies, to evaluate data fitting in the trained regime. We also evaluate on 52 longer (2 s) test trajectories from other input frequencies, to evaluate the interpolation capabilities of the learned model: {2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17} Hz. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam' optimizer and 'torchdiffeq' for automatic differentiation, but does not provide specific version numbers for these or other software components. |
| Experiment Setup | Yes | We train on N = 50 trajectories... We optimize the parameters using Adam... and a learning rate of 0.005. ... Both recognition and dynamics models are feed-forward networks with five hidden layers of 50 and 100 neurons, respectively, and SiLU activation. ... We use the Adam... optimizer with decaying learning rate starting at 8×10⁻³ for the first two settings, 5×10⁻³ for the third setting. |
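The quoted setup (SiLU activations, Adam with a decaying learning rate starting at 8×10⁻³) can be illustrated with a minimal sketch. This is not the authors' code: the exponential decay rate below is an assumed placeholder, since the excerpt only says the learning rate decays, and `silu`/`decayed_lr` are hypothetical helper names.

```python
import math

def silu(x: float) -> float:
    # SiLU activation (x * sigmoid(x)), as used by both the
    # recognition and dynamics feed-forward networks in the paper.
    return x / (1.0 + math.exp(-x))

def decayed_lr(step: int, lr0: float = 8e-3, decay: float = 0.99) -> float:
    # Decaying learning-rate schedule starting at 8e-3; the exponential
    # form and decay=0.99 are assumptions, not stated in the paper.
    return lr0 * decay ** step
```

In practice such a schedule would be attached to an Adam optimizer (the paper reports using Adam via PyTorch-style tooling, with `torchdiffeq` for the ODE solves).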