Deep Continuous Networks

Authors: Nergis Tomen, Silvia-Laura Pintea, Jan van Gemert

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that DCNs are versatile and highly applicable to standard image classification and reconstruction problems, where they improve parameter and data efficiency, and allow for metaparametrization. We illustrate the biological plausibility of the scale distributions learned by DCNs and explore their performance in a neuroscientifically inspired pattern completion task."
Researcher Affiliation | Academia | Nergis Tomen, Silvia L. Pintea, Jan C. van Gemert (Computer Vision Lab, Delft University of Technology, Delft, Netherlands).
Pseudocode | No | The paper describes the architecture and components of DCNs, including equations (Eq. 1, Eq. 2), but does not provide any structured pseudocode or algorithm blocks.
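Since the paper gives its model as equations rather than pseudocode, a minimal PyTorch sketch may help orient readers. This is an assumption-laden illustration, not a transcription of the paper's Eq. 1 and Eq. 2: the class name `DCNDynamics` and the plain 3x3 convolution are stand-ins for the paper's actual filter parametrization; only the CELU activation is taken from the paper.

```python
import torch
import torch.nn as nn

class DCNDynamics(nn.Module):
    """Illustrative dynamics f(t, x) for an ODE-defined feature map.

    The 3x3 convolution is a placeholder for the paper's actual filter
    parametrization; CELU is the activation the paper reports using.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.CELU()

    def forward(self, t, x):
        # dx/dt = f(t, x): features evolve continuously in "depth" t
        return self.act(self.conv(x))
```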
Open Source Code | Yes | "All our code is available at https://github.com/ntomen/Deep-Continuous-Networks."
Open Datasets | Yes | "We train our networks using cross-entropy loss and the CIFAR-10 dataset (Krizhevsky, 2009)."
Dataset Splits | Yes | The CIFAR-10 dataset is used with the standard 50k train/10k validation split.
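As a hedged sketch of how the standard split can be reproduced with torchvision (the paper's exact augmentation pipeline is not quoted here, so `ToTensor` is a minimal placeholder):

```python
import torch
from torchvision import datasets, transforms

# Standard CIFAR-10 split: 50k training images, 10k held-out images.
transform = transforms.ToTensor()  # placeholder; the paper's augmentations may differ
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
val_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

# Batch size 128 matches the experiment setup quoted below.
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=128, shuffle=False)
```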
Hardware Specification | No | "Finally, we adapt the GPU implementation of ODE solvers to solve the equations of motion for a predefined time interval t ∈ [0, T] using the adaptive step size DOPRI method." Only a "GPU implementation" is mentioned; no specific hardware model or other details are given.
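The quoted solver setup can be approximated with torchdiffeq, whose `odeint` exposes the adaptive-step Dormand-Prince method as `dopri5`. The integration horizon `T = 1.0` and the reuse of the `DCNDynamics` sketch above are assumptions:

```python
import torch
from torchdiffeq import odeint  # https://github.com/rtqichen/torchdiffeq/

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
func = DCNDynamics(channels=16).to(device)      # illustrative dynamics from the sketch above
x0 = torch.randn(4, 16, 32, 32, device=device)  # a batch of CIFAR-sized feature maps
t = torch.tensor([0.0, 1.0], device=device)     # solve over t in [0, T]; T = 1.0 is assumed

# Adaptive step size DOPRI ("dopri5") solver, running on GPU when available.
xT = odeint(func, x0, t, method="dopri5")[-1]   # state at t = T
```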
Software Dependencies | No | The paper mentions the Adam optimizer, the CELU activation, and torchdiffeq (linked in footnote 1 as https://github.com/rtqichen/torchdiffeq/), but gives no version numbers for these software components or for the underlying programming language.
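Since no versions are pinned, a reproduction can at least record the environment it actually ran under. A minimal sketch (package names are the public PyPI ones; no specific versions are claimed):

```python
from importlib.metadata import version, PackageNotFoundError

# Log the installed versions of the packages the paper mentions or implies.
for pkg in ("torch", "torchvision", "torchdiffeq"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg} is not installed")
```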
Experiment Setup | Yes | "All models were trained for 300 epochs using the Adam optimizer with a learning rate of 0.001 and a batch size of 128."
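A hedged training-loop sketch matching the quoted hyperparameters (Adam, learning rate 0.001, batch size 128, 300 epochs, cross-entropy loss). The linear `model` is a stand-in, not the DCN, and `train_loader` is assumed to be the loader from the dataset sketch above:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier, not the DCN
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)       # learning rate as quoted
criterion = nn.CrossEntropyLoss()                                # cross-entropy as quoted

for epoch in range(300):                                         # 300 epochs as quoted
    for images, labels in train_loader:                          # batch size 128 (see dataset sketch)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```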