Neural Contractive Dynamical Systems

Authors: Hadi Beik Mohammadi, Søren Hauberg, Georgios Arvanitidis, Nadia Figueroa, Gerhard Neumann, Leonel Rozo

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we demonstrate that our approach encodes the desired dynamics more accurately than the current state-of-the-art, which provides less strong stability guarantees.
Researcher Affiliation | Collaboration | 1 Bosch Center for Artificial Intelligence (BCAI), 2 Karlsruhe Institute of Technology (KIT), 3 Technical University of Denmark (DTU), 4 University of Pennsylvania (UPenn).
Pseudocode | Yes | Algorithm 1: Neural Contractive Dynamical Systems (NCDS): Training in task space; Algorithm 2: Neural Contractive Dynamical Systems (NCDS): Robot Control Scheme.
Open Source Code | No | The paper links to a project website with videos (https://sites.google.com/view/neuralcontraction/home), but it neither states that the source code for the method is available nor links to a code repository.
Open Datasets | Yes | First, we test our approach on the LASA dataset (Lemme et al., 2015), often used for benchmarking asymptotic stability.
Dataset Splits | No | The paper describes the datasets used and how they were augmented (e.g., stacking trajectories for LASA-4D/8D), but it does not specify the explicit training, validation, and test splits (e.g., percentages or exact counts) needed to reproduce the data partitioning.
Hardware Specification | No | The paper mentions a 7-DoF Franka Emika Panda robot for the experiments but does not provide details about the computational hardware (e.g., CPU/GPU models, memory) used for training or inference.
Software Dependencies | No | The paper mentions the PyTorch framework (Paszke et al., 2019) and odeint from the torchdiffeq Python package (Chen, 2018), but it does not give version numbers for these components or for Python itself. A usage sketch of odeint follows this table.
Experiment Setup | Yes | For the VAE, we employed an injective generator based on M-flows (Brehmer & Cranmer, 2020), specifically rational-quadratic neural spline flows structured with three coupling layers. Within each coupling transform, half of the input values underwent an elementwise transformation by a monotonic rational-quadratic spline whose parameters were computed by a residual network of two residual blocks, each consisting of a single hidden layer with 30 nodes. Throughout the network we employed Tanh activations and used neither batch normalization nor dropout. The rational-quadratic splines were constructed with ten bins evenly distributed over the range (-10, 10). The Jacobian network was implemented as a neural network with two hidden layers of 500 nodes each... Training ran for 1000 epochs with the Adam optimizer. A minimal PyTorch sketch of these hyperparameters appears after this table.
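As referenced in the Software Dependencies row, the paper relies on PyTorch and on odeint from the torchdiffeq package. The snippet below is a minimal usage sketch, not the authors' code: DummyDynamics is a hypothetical stand-in for the paper's learned dynamical system, and the state dimension and time grid are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code): integrating a learned vector
# field with odeint from torchdiffeq. `DummyDynamics` is a hypothetical
# placeholder for the paper's contractive dynamics.
import torch
from torchdiffeq import odeint


class DummyDynamics(torch.nn.Module):
    """Toy vector field f(t, x); torchdiffeq expects this call signature."""

    def __init__(self, dim: int = 2):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 64),
            torch.nn.Tanh(),
            torch.nn.Linear(64, dim),
        )

    def forward(self, t, x):
        return self.net(x)  # this toy system is time-invariant, so t is unused


f = DummyDynamics(dim=2)
x0 = torch.zeros(1, 2)              # initial state (assumed 2-D for illustration)
ts = torch.linspace(0.0, 1.0, 50)   # time grid to evaluate the solution on
traj = odeint(f, x0, ts)            # returns a tensor of shape (50, 1, 2)
```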
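The Experiment Setup row reports a Jacobian network with two hidden layers of 500 nodes, Tanh activations, and 1000 epochs of Adam training. The sketch below instantiates just those reported hyperparameters in PyTorch; the input/output dimensions, the dummy data, and the loss are hypothetical placeholders, since the paper's excerpt does not specify them, and this is not the authors' training code.

```python
# Minimal sketch of the reported Jacobian-network hyperparameters:
# two hidden layers of 500 nodes, Tanh activations, 1000 epochs of Adam.
# Dimensions, data, and the loss below are hypothetical placeholders.
import torch

dim = 2  # assumed state dimension (not stated in the excerpt)
jacobian_net = torch.nn.Sequential(
    torch.nn.Linear(dim, 500),
    torch.nn.Tanh(),
    torch.nn.Linear(500, 500),
    torch.nn.Tanh(),
    torch.nn.Linear(500, dim * dim),  # flattened (dim x dim) Jacobian entries
)

optimizer = torch.optim.Adam(jacobian_net.parameters())
x = torch.randn(128, dim)             # dummy inputs standing in for real data
target = torch.randn(128, dim * dim)  # dummy regression targets

for epoch in range(1000):  # 1000 epochs, as reported in the paper
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(jacobian_net(x), target)
    loss.backward()
    optimizer.step()
```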