Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Spectral Networks and Locally Connected Networks on Graphs

Authors: Joan Bruna; Wojciech Zaremba; Arthur Szlam; Yann LeCun

ICLR 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures." From Section 5, Numerical Experiments: "The previous constructions are tested on two variations of the MNIST data set."
Researcher Affiliation | Academia | Joan Bruna, New York University (EMAIL); Wojciech Zaremba, New York University (EMAIL); Arthur Szlam, The City College of New York (EMAIL); Yann LeCun, New York University (EMAIL)
Pseudocode | No | The paper describes its algorithms in prose and mathematical equations but does not present them in a structured pseudocode or algorithm block.
Open Source Code | No | The paper does not provide concrete access to source code for its methodology.
Open Datasets | Yes | "The previous constructions are tested on two variations of the MNIST data set."
Dataset Splits | No | The paper states "We train the models with cross-entropy loss, using a fixed learning rate of 0.1 with momentum 0.9." but does not specify the training, validation, or test splits.
Hardware Specification | No | The paper does not report the hardware used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software dependencies with version numbers.
Experiment Setup | Yes | "In all the experiments, we use Rectified Linear Units as nonlinearities and max-pooling. We train the models with cross-entropy loss, using a fixed learning rate of 0.1 with momentum 0.9."
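The claim quoted under Research Type, that spectral filters can have a parameter count independent of the input size, can be illustrated with a minimal numpy sketch. This is not the authors' code: the graph, the interpolation scheme, and the function names are illustrative assumptions. The idea is that a filter is defined by a small, fixed number of coefficients that are smoothly interpolated over the Laplacian spectrum, regardless of how many nodes the graph has.

```python
# Hypothetical sketch (not the paper's implementation): a spectral filter
# applied in the eigenbasis of a graph Laplacian. The filter is defined by
# len(theta) coefficients, independent of the graph size n.
import numpy as np

def spectral_conv(x, U, theta):
    """Compute U diag(g) U^T x, where g interpolates the small coefficient
    vector theta across the n spectral multipliers (a smoothing assumption
    standing in for the paper's smooth spectral parametrization)."""
    n = U.shape[0]
    g = np.interp(np.linspace(0.0, 1.0, n),
                  np.linspace(0.0, 1.0, len(theta)), theta)
    return U @ (g * (U.T @ x))

# Toy example: Laplacian of a path graph on 8 nodes.
n = 8
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
w, U = np.linalg.eigh(L)          # eigendecomposition of the Laplacian
x = np.random.randn(n)            # a signal on the graph's nodes
y = spectral_conv(x, U, theta=np.array([1.0, 0.5, 0.1]))  # 3 params, any n
```

With `theta` set to all ones the interpolated multipliers are identically one, so the filter reduces to the identity, which is a quick sanity check on the construction.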
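The Experiment Setup row quotes all the optimization hyperparameters the paper reports. A minimal sketch, assuming PyTorch, of that setup: ReLU nonlinearities, max-pooling, cross-entropy loss, and SGD with a fixed learning rate of 0.1 and momentum 0.9. The layer sizes and the ordinary 2-D convolution standing in for the paper's graph convolution are illustrative, not the paper's architecture.

```python
# Illustrative training setup matching the quoted hyperparameters; the
# architecture itself is a placeholder, not the paper's spectral network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1),   # stand-in for a graph-conv layer
    nn.ReLU(),                        # "Rectified Linear Units"
    nn.MaxPool2d(2),                  # "max-pooling"
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),
)
criterion = nn.CrossEntropyLoss()     # "cross-entropy loss"
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# One illustrative optimization step on dummy MNIST-shaped data.
x = torch.randn(8, 1, 28, 28)
targets = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(x), targets)
loss.backward()
optimizer.step()
```

The fixed learning rate (no schedule) mirrors the quoted setup; the batch size and input shape are assumptions chosen to match MNIST's 28x28 images.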