Invariance and Stability of Deep Convolutional Representations

Authors: Alberto Bietti, Julien Mairal

NeurIPS 2017

Reproducibility assessment (variable, result, and LLM response):

Research Type: Theoretical
LLM Response: In this paper, we study deep signal representations that are near-invariant to groups of transformations and stable to the action of diffeomorphisms, without losing signal information. This is achieved by generalizing the multilayer kernel introduced in the context of convolutional kernel networks and by studying the geometry of the corresponding reproducing kernel Hilbert space. We show that the signal representation is stable, and that models from this functional space, such as a large class of convolutional neural networks, may enjoy the same stability.

Researcher Affiliation: Academia
LLM Response: Alberto Bietti, Inria, alberto.bietti@inria.fr; Julien Mairal, Inria, julien.mairal@inria.fr. Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France.

Pseudocode: No
LLM Response: No structured pseudocode or algorithm blocks were found in the paper.

Open Source Code: No
LLM Response: The paper does not provide concrete access to source code (e.g., a specific repository link, an explicit code-release statement, or code in the supplementary materials) for the methodology described.

Open Datasets: No
LLM Response: The paper is theoretical and does not describe experiments; therefore, no information about publicly available datasets or their access is provided.

Dataset Splits: No
LLM Response: The paper is theoretical and does not describe experiments or data partitioning; therefore, no dataset split information is provided.

Hardware Specification: No
LLM Response: The paper is theoretical and does not describe experiments; therefore, no hardware details are provided.

Software Dependencies: No
LLM Response: The paper is theoretical and does not describe experiments; therefore, no ancillary software details with version numbers are provided.

Experiment Setup: No
LLM Response: The paper is theoretical and does not describe experiments; therefore, no experimental setup details (e.g., hyperparameters or training configurations) are provided.
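The paper's central object is a multilayer convolutional kernel whose reproducing kernel Hilbert space contains CNN-like models. As a rough illustration only, and not the paper's exact construction, a single layer of such a kernel can be sketched as patch extraction, a Gaussian kernel on normalized patches, and average pooling; all function names and parameter values below are hypothetical.

```python
import numpy as np

def patches(x, size):
    # Extract all overlapping patches of length `size` from a 1-D signal.
    return np.stack([x[i:i + size] for i in range(len(x) - size + 1)])

def conv_kernel(x, y, patch_size=5, sigma=1.0):
    """One layer of a convolutional kernel (illustrative sketch):
    compare every pair of l2-normalized patches with a Gaussian (RBF)
    kernel, then average the comparisons (a simple form of pooling).
    Equivalent to an inner product of patch-averaged feature maps,
    so the result is symmetric and positive."""
    px, py = patches(x, patch_size), patches(y, patch_size)
    px = px / (np.linalg.norm(px, axis=1, keepdims=True) + 1e-8)
    py = py / (np.linalg.norm(py, axis=1, keepdims=True) + 1e-8)
    # Squared distances between all patch pairs via broadcasting.
    sq = ((px[:, None, :] - py[None, :, :]) ** 2).sum(-1)
    return float(np.exp(-sq / (2 * sigma ** 2)).mean())
```

Stacking such layers (each operating on the feature maps produced by the previous one) yields the multilayer kernel the paper analyzes; the stability results concern how this representation varies under diffeomorphisms of the input.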