ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs
Authors: Amir Gholaminejad, Kurt Keutzer, George Biros
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show results on CIFAR-10/100 datasets using ResNet and SqueezeNext neural networks. Finally, we show results using ANODE, shown in Fig. 3 for a SqueezeNext network on the CIFAR-10 dataset. Furthermore, Figure 4 shows results using a variant of ResNet-18, where the non-transition blocks are replaced with ODE blocks, on the CIFAR-10 dataset. |
| Researcher Affiliation | Academia | 1 Berkeley Artificial Intelligence Research Lab, EECS, UC Berkeley; 2 Oden Institute for Computational Engineering and Sciences, UT Austin |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about open-sourcing code or links to a code repository. |
| Open Datasets | Yes | We show results on CIFAR-10/100 datasets using ResNet and SqueezeNext neural networks. |
| Dataset Splits | No | The paper mentions using CIFAR-10/100 datasets and discusses training and testing performance, but does not provide specific details on dataset splits (e.g., percentages, sample counts, or explicit splitting methodology) for training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions MATLAB's ode45 solver in an illustrative example, but it does not provide specific version numbers for the key software components or libraries used for implementing ANODE or running the experiments. |
| Experiment Setup | No | The paper discusses the use of the Euler method and RK-2 for solving ODEs and mentions replacing blocks in SqueezeNext and ResNet-18, but it does not provide specific hyperparameter values like learning rate, batch size, number of epochs, or optimizer settings. |
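For context on the solvers named in the table: an ODE block integrates dz/dt = f(z, t) across a layer instead of applying a single residual update. Below is a minimal NumPy sketch of fixed-step Euler and RK-2 (Heun) integration; `odeint_fixed` and the right-hand side `f` are hypothetical names for illustration, not the authors' implementation.

```python
import numpy as np

def odeint_fixed(f, z0, t0=0.0, t1=1.0, n_steps=4, method="euler"):
    """Integrate dz/dt = f(z, t) from t0 to t1 with a fixed-step scheme.

    A generic sketch of the Euler and RK-2 solvers mentioned in the paper,
    not the ANODE implementation itself.
    """
    z, t = np.asarray(z0, dtype=float), t0
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        if method == "euler":
            z = z + h * f(z, t)            # forward Euler step
        elif method == "rk2":
            k1 = f(z, t)                   # slope at the start of the step
            k2 = f(z + h * k1, t + h)      # slope at the Euler endpoint
            z = z + 0.5 * h * (k1 + k2)    # Heun's (RK-2) update
        else:
            raise ValueError(f"unknown method: {method}")
        t += h
    return z

# Example: dz/dt = -z has the exact solution z(1) = z0 * exp(-1).
z1 = odeint_fixed(lambda z, t: -z, z0=[1.0], n_steps=100, method="rk2")
```

With enough steps both schemes converge to the exact solution, RK-2 at second order versus Euler's first order.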