On Robustness of Neural Ordinary Differential Equations
Authors: Hanshu Yan, Jiawei Du, Vincent Tan, Jiashi Feng
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first present an empirical study on the robustness of the neural ODE-based networks (ODENets) by exposing them to inputs with various types of perturbations and subsequently investigating the changes of the corresponding outputs. (A sketch of this perturb-and-evaluate protocol follows the table.) |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, National University of Singapore |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper states: 'Our implementation builds on the open-source neural ODE codes.1' with footnote 1 pointing to 'https://github.com/rtqichen/torchdiffeq.' This indicates they used an existing open-source library, not that they released their own code for the specific methodology described in this paper (e.g., TisODE). (A usage sketch of `torchdiffeq` follows the table.) |
| Open Datasets | Yes | We conduct experiments to compare the robustness of ODENets with CNN models on three datasets, i.e., the MNIST (LeCun et al., 1998), the SVHN (Netzer et al., 2011), and a subset of the ImageNet dataset (Deng et al., 2009). |
| Dataset Splits | No | The paper specifies '3,000 training images and 300 test images' for the ImgNet10 dataset, but does not mention a distinct validation split for hyperparameter tuning or early stopping. For MNIST and SVHN, no specific split percentages or counts are given for training, validation, or testing, only that they are used. |
| Hardware Specification | No | No specific hardware details such as GPU models (e.g., NVIDIA A100, RTX 2080 Ti) or CPU models were mentioned in the paper for running the experiments. |
| Software Dependencies | No | The paper mentions using 'the PyTorch framework' and that their 'implementation builds on the open-source neural ODE codes' (referencing `torchdiffeq`), but no specific version numbers for PyTorch or the neural ODE codes are provided. |
| Experiment Setup | Yes | Here, we use the easily implemented Euler method in the experiments. To balance the computation and the continuity of the flow, we solve the ODE initial value problem in equation (1) by the Euler method with step size 0.1. ... all the hyperparameters are kept the same, including training epochs, learning rate schedules, and weight decay coefficients. ... set the weight decay parameters for all models to be 0.0005. ... The regularization parameter for the steady-state loss L_ss is set to be 0.1. (Sketches of this solver configuration and the steady-state regularizer follow the table.) |
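
As a pointer for would-be reproducers, below is a minimal sketch (not the authors' released code) of an ODE block built on the `torchdiffeq` library the paper cites, configured with the fixed-step Euler solver and step size 0.1 quoted in the experiment setup; the convolutional dynamics and channel width are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEFunc(nn.Module):
    """Dynamics f(h, t) parameterizing dh/dt inside the ODE block."""

    def __init__(self, channels=64):  # channel width is an assumption
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, t, h):
        return self.conv2(torch.relu(self.conv1(torch.relu(h))))


class ODEBlock(nn.Module):
    """Solves the initial value problem from h(0) to the terminal state h(1)."""

    def __init__(self, func):
        super().__init__()
        self.func = func
        self.t = torch.tensor([0.0, 1.0])

    def forward(self, h0):
        # Fixed-step Euler with step size 0.1, matching the quoted setup.
        h = odeint(self.func, h0, self.t,
                   method="euler", options={"step_size": 0.1})
        return h[-1]  # state at t = 1
```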
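The steady-state loss `L_ss` from the experiment-setup row is, in the paper's TisODE variant, a penalty on the flow continuing to move after the terminal time T: the dynamics are integrated further over [T, 2T] and the accumulated change is driven toward zero. The sketch below is a paraphrase of that idea under the assumption T = 1, not the authors' exact implementation.

```python
import torch
from torchdiffeq import odeint


def steady_state_loss(func, h_T, T=1.0, step=0.1):
    """Euler approximation of |integral of f(h(t)) dt over [T, 2T]|.

    By the fundamental theorem of calculus the integral equals
    h(2T) - h(T), the net drift of the state after time T; it
    vanishes exactly when the flow has settled to a steady state.
    """
    t = torch.arange(T, 2 * T + step / 2, step)  # grid T, T+0.1, ..., 2T
    states = odeint(func, h_T, t, method="euler",
                    options={"step_size": step})
    return (states[-1] - states[0]).abs().mean()
```

With the quoted hyperparameters, the training objective would then be roughly `loss = task_loss + 0.1 * steady_state_loss(func, h_T)`, with `weight_decay=0.0005` passed to the optimizer.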
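Finally, the perturb-and-evaluate protocol from the first row amounts to feeding perturbed copies of the test inputs through a trained network and comparing accuracy against the clean baseline. A Gaussian-noise variant is sketched here; the paper also evaluates adversarial perturbations, and the noise level `sigma` is a free parameter of the protocol.

```python
import torch


@torch.no_grad()
def accuracy_under_noise(model, loader, sigma):
    """Classification accuracy on inputs corrupted by N(0, sigma^2) noise."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_noisy = x + sigma * torch.randn_like(x)
        pred = model(x_noisy).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```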