Spectral Networks and Locally Connected Networks on Graphs
Authors: Joan Bruna; Wojciech Zaremba; Arthur Szlam; Yann LeCun
ICLR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures. From Section 5, Numerical Experiments: The previous constructions are tested on two variations of the MNIST data set. (A spectral-filtering sketch follows the table.) |
| Researcher Affiliation | Academia | Joan Bruna, New York University (bruna@cims.nyu.edu); Wojciech Zaremba, New York University (woj.zaremba@gmail.com); Arthur Szlam, The City College of New York (aszlam@ccny.cuny.edu); Yann LeCun, New York University (yann@cs.nyu.edu) |
| Pseudocode | No | The paper describes algorithms in text and mathematical equations but does not present them in a structured pseudocode or algorithm block format. |
| Open Source Code | No | The paper does not provide access to source code for the methodology it describes. |
| Open Datasets | Yes | The previous constructions are tested on two variations of the MNIST data set. |
| Dataset Splits | No | The paper states 'We train the models with cross-entropy loss, using a fixed learning rate of 0.1 with momentum 0.9.' but does not specify the dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper does not specify the hardware used to run its experiments. |
| Software Dependencies | No | The paper does not list ancillary software dependencies or version numbers. |
| Experiment Setup | Yes | In all the experiments, we use Rectified Linear Units as nonlinearities and max-pooling. We train the models with cross-entropy loss, using a fixed learning rate of 0.1 with momentum 0.9. (See the training-setup sketch below.) |
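The "Research Type" row quotes the paper's central claim: spectral filters whose parameter count is independent of the input size. As a rough illustration of what such a layer looks like, here is a minimal sketch of a spectral graph convolution in PyTorch. The class name `SpectralConv`, the tensor shapes, and the initialization are assumptions made for this sketch, not the authors' (unreleased) implementation, and the paper's spline smoothing of the spectral multipliers is omitted.

```python
import torch
import torch.nn as nn

class SpectralConv(nn.Module):
    """Sketch of a spectral graph convolution (after Bruna et al., ICLR 2014).

    Filters act as diagonal multipliers in the eigenbasis of the graph
    Laplacian; keeping only the first n_eig eigenvectors makes the number
    of learned parameters independent of the number of graph nodes.
    """

    def __init__(self, U, in_channels, out_channels, n_eig):
        super().__init__()
        # U: (n_nodes, k) leading Laplacian eigenvectors, precomputed offline.
        self.register_buffer("U", U[:, :n_eig])
        # One learned spectral multiplier per (input, output) channel pair.
        self.w = nn.Parameter(0.01 * torch.randn(in_channels, out_channels, n_eig))

    def forward(self, x):
        # x: (batch, n_nodes, in_channels)
        xf = torch.einsum("bni,ne->bei", x, self.U)     # project to spectral domain
        yf = torch.einsum("bei,ioe->beo", xf, self.w)   # diagonal spectral filtering
        return torch.einsum("beo,ne->bno", yf, self.U)  # back to the node domain
```

In the paper, the multipliers are additionally constrained through a fixed smoothing kernel so that each filter has O(1) free parameters; the sketch leaves them unconstrained for brevity.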
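The "Experiment Setup" row contains the only optimization details the paper reports: ReLU nonlinearities, max-pooling, cross-entropy loss, and SGD with a fixed learning rate of 0.1 and momentum 0.9. A minimal PyTorch sketch of that configuration follows; the model is a placeholder, since the paper's per-experiment architectures and batch sizes are not reproduced in this table, and pooling is omitted because its structure depends on the graph clustering.

```python
import torch
import torch.nn as nn

# Placeholder architecture; the paper's models vary across experiments.
model = nn.Sequential(
    nn.Linear(784, 256),  # stand-in for a spectral / locally connected layer
    nn.ReLU(),            # paper: Rectified Linear Units as nonlinearities
    nn.Linear(256, 10),
)

criterion = nn.CrossEntropyLoss()  # paper: cross-entropy loss
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9  # paper: fixed lr 0.1, momentum 0.9
)

def train_step(x, y):
    """One SGD step on a batch (x, y)."""
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```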