Stochastic Deep Networks

Authors: Gwendoline De Bie, Gabriel Peyré, Marco Cuturi

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide a theoretical analysis of these building blocks, review our architectures' approximation abilities and robustness w.r.t. perturbation, and try them on various discriminative and generative tasks. To exemplify the use of our stochastic deep architectures, we consider classification, generation and dynamic prediction tasks. The goal is to highlight the versatility of these architectures and their ability to handle as input and/or output both probability distributions and vectors. In all cases, the procedures displayed similar results when rerun, hence results can be considered as quite stable and representative. Table 1 displays our results, compared with the PointNet (Qi et al., 2016) baseline. Table 2. ModelNet40 classification results.
Researcher Affiliation | Collaboration | 1 École Normale Supérieure, DMA, Paris, France; 2 CNRS; 3 CREST/ENSAE ParisTech; 4 Google Brain, Paris, France.
Pseudocode | No | The paper defines mathematical operations and blocks but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We perform classification on the 2-D MNIST dataset of handwritten digits. We evaluate our model on the ModelNet40 (Wu et al., 2015b) shape classification benchmark.
Dataset Splits | Yes | The dataset contains 3-D CAD models from 40 man-made categories, split into 9,843 examples for training and 2,468 for testing. The weights are learnt with a weighted cross-entropy loss function over a training set of 55,000 examples and tested on a set of 10,000 examples.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU model, CPU type, memory).
Software Dependencies | No | The paper mentions using the 'Adam optimizer (Kingma and Ba, 2014)' and 'Sinkhorn's algorithm (Cuturi, 2013)' but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | The weights are learnt with a weighted cross-entropy loss function over a training set of 55,000 examples and tested on a set of 10,000 examples. Initialization is performed through the Xavier method (Glorot and Bengio, 2010) and learning with the Adam optimizer (Kingma and Ba, 2014). The layer dimensions are [3, 10, 500, 800, 40] (for the ModelNet40 classification network).
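
For orientation, below is a minimal PyTorch sketch of the quoted ModelNet40 classification setup. It only illustrates the stated layer widths [3, 10, 500, 800, 40], Xavier initialization, the Adam optimizer, and a weighted cross-entropy loss; the choice of PyTorch, the uniform class weights, and the default learning rate are assumptions, and the paper's stochastic (measure-to-measure) building blocks are not reproduced here.

```python
import torch
import torch.nn as nn

# Sketch only: a plain feed-forward stack with the layer widths quoted above.
dims = [3, 10, 500, 800, 40]  # 3-D point features in, 40 ModelNet40 classes out

layers = []
for d_in, d_out in zip(dims[:-1], dims[1:]):
    layers += [nn.Linear(d_in, d_out), nn.ReLU()]
model = nn.Sequential(*layers[:-1])  # drop the final ReLU so the last layer outputs logits

# Xavier (Glorot & Bengio, 2010) initialization, as stated in the paper.
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

# Weighted cross-entropy; the actual class weights are not given, so uniform
# weights are used here as a placeholder.
class_weights = torch.ones(40)
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.parameters())  # learning rate not stated; default used

# One illustrative training step on dummy data.
x = torch.randn(32, 3)           # 32 points with 3-D coordinates
y = torch.randint(0, 40, (32,))  # dummy class labels
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```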