Convolutional Rectifier Networks as Generalized Tensor Decompositions

Authors: Nadav Cohen, Amnon Shashua

ICML 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper we describe a construction based on generalized tensor decompositions, that transforms convolutional arithmetic circuits into convolutional rectifier networks. We then use mathematical tools available from the world of arithmetic circuits to prove new results. First, we show that convolutional rectifier networks are universal with max pooling but not with average pooling. Second, and more importantly, we show that depth efficiency is weaker with convolutional rectifier networks than it is with convolutional arithmetic circuits. (See the illustrative sketch following this table.)
Researcher Affiliation | Academia | Nadav Cohen (COHENNADAV@CS.HUJI.AC.IL), Amnon Shashua (SHASHUA@CS.HUJI.AC.IL), The Hebrew University of Jerusalem
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | No | The paper does not provide access information for a publicly available dataset; as a theoretical work it does not use datasets for empirical evaluation.
Dataset Splits | No | The paper does not provide dataset split information, as it is a theoretical paper without empirical experiments.
Hardware Specification | No | The paper does not provide hardware details, as it does not involve empirical experiments requiring hardware.
Software Dependencies | No | The paper does not describe software implementations or dependencies for experiments.
Experiment Setup | No | The paper does not describe empirical experiments or training configurations.
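
For readers unfamiliar with the construction summarized in the Research Type row: the paper replaces the multiplication underlying a tensor (outer) product with an activation-pooling pairing, which is what turns a convolutional arithmetic circuit into a convolutional rectifier network. The Python sketch below is a minimal illustration of such a generalized product for two vectors, not the authors' implementation; the function and operator names (generalized_outer, prod_op, relu_max_op) and the NumPy setup are assumptions introduced here for illustration only.

    import numpy as np

    def generalized_outer(a, b, g):
        # Generalized outer product under a binary operator g:
        # result[i, j] = g(a[i], b[j]).  With g(x, y) = x * y this reduces to
        # the ordinary outer product used by convolutional arithmetic circuits.
        return np.array([[g(ai, bj) for bj in b] for ai in a])

    # Operator underlying convolutional arithmetic circuits: plain multiplication.
    def prod_op(x, y):
        return x * y

    # Hypothetical operator pairing a ReLU activation with max pooling, sketching
    # the activation-pooling operator the paper associates with convolutional
    # rectifier networks (the names and exact form here are illustrative assumptions).
    def relu(z):
        return max(z, 0.0)

    def relu_max_op(x, y):
        return max(relu(x), relu(y))

    a = np.array([1.0, -2.0, 3.0])
    b = np.array([0.5, -1.0])

    print(generalized_outer(a, b, prod_op))      # standard tensor (outer) product
    print(generalized_outer(a, b, relu_max_op))  # generalized product for ReLU + max pooling

Swapping relu_max_op for an average-style operator (for example, (relu(x) + relu(y)) / 2) would give the average-pooling analogue. The paper's universality and depth-efficiency results concern what such generalized decompositions can and cannot represent; this toy example only illustrates the operator substitution and does not attempt to demonstrate those results.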