IMEXnet: A Forward Stable Deep Neural Network
Authors: Eldad Haber, Keegan Lensink, Eran Treister, Lars Ruthotto
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 4, we conduct numerical experiments on a synthetic dataset that is constructed to demonstrate the advantages and limitations of the method, as well as on the NYU depth dataset. |
| Researcher Affiliation | Collaboration | 1Department of Earth, Ocean and Atmospheric Sciences, University of British Columbia, Vancouver, Canada 2Xtract AI, Vancouver, Canada 3Department of Computer Science, Ben Gurion University of the Negev, Be'er Sheva, Israel 4Departments of Mathematics and Computer Science, Emory University, Atlanta, GA, USA. |
| Pseudocode | Yes | Algorithm 1: PyTorch implementation of the implicit convolution. |
| Open Source Code | Yes | Our code is available at https://github.com/HaberGroup/SemiImplicitDNNs. |
| Open Datasets | Yes | The NYU-Depth V2 dataset is a set of indoor images recorded by both RGB and Depth cameras from the Microsoft Kinect. Four different scenes from the dataset are plotted in Figure 3. The goal of our network is to use the RGB images in order to predict the depth images. |
| Dataset Splits | Yes | For the following experiments we generate a dataset of 1024 training examples and 64 validation examples. For the kitchen dataset we used only 8 training images, 2 validation images, and one test image. |
| Hardware Specification | No | The paper mentions 'GPU acceleration' but does not specify any particular hardware models (e.g., CPU, GPU, or TPU models) used for the experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch code' and 'existing machine learning packages' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | In both cases, we use stochastic gradient descent to minimize the weighted cross entropy loss for 200 epochs with a learning rate of 0.001 and a batch size of 8. We use 500 epochs to fit the data. |
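For context on the "implicit convolution" referenced in the Pseudocode row: the IMEX (implicit-explicit) idea is to apply part of each layer's convolution implicitly, which requires solving a linear system of the form (I + hK)z = y, where K is a convolution operator. Below is a minimal NumPy sketch of that solve, not the authors' PyTorch code; the periodic boundary conditions, the 1-D setting, and the function name `implicit_conv_solve` are illustrative assumptions. It uses the standard fact that a circulant (periodic) convolution is diagonalized by the FFT, so the inverse reduces to a pointwise division in the Fourier domain.

```python
import numpy as np

def implicit_conv_solve(y, kernel, h):
    """Solve (I + h*K) z = y, where K is periodic convolution with
    `kernel`, by diagonalizing K in the Fourier domain.

    Illustrative sketch only: 1-D signal, periodic boundary, odd-length
    kernel centered at index 0 of the circulant embedding.
    """
    n = y.shape[-1]
    # Embed the small kernel into an n-length circulant stencil.
    k = np.zeros(n)
    c = len(kernel) // 2
    for i, v in enumerate(kernel):
        k[(i - c) % n] = v
    # In Fourier space, (I + h*K) acts as pointwise scaling by (1 + h*K_hat).
    K_hat = np.fft.fft(k)
    z_hat = np.fft.fft(y) / (1.0 + h * K_hat)
    return np.real(np.fft.ifft(z_hat))
```

With a symmetric positive semi-definite stencil such as the discrete negative Laplacian `[-1, 2, -1]`, the denominator `1 + h*K_hat` stays bounded away from zero for any step size `h > 0`, which is the source of the forward stability the paper advertises.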