Inductive Bias of Deep Convolutional Networks through Pooling Geometry
Authors: Nadav Cohen, Amnon Shashua
ICLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The conclusions from our analyses are empirically validated in sec. 7. Finally, sec. 8 concludes. ... Our experiments are based on a synthetic classification benchmark inspired by medical imaging tasks. |
| Researcher Affiliation | Academia | Nadav Cohen & Amnon Shashua {cohennadav,shashua}@cs.huji.ac.il |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The latter is fully available online at https://github.com/HUJI-Deep/inductive-pooling. |
| Open Datasets | No | Our experiments are based on a synthetic classification benchmark inspired by medical imaging tasks. ... To generate labeled sets for classification (train and test), we render multiple images, sort them according to their closedness and symmetry... The paper created a synthetic dataset and does not provide a link, DOI, or citation for its public availability. |
| Dataset Splits | Yes | Each of the latter has 20000 images for training and 4000 images for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments. It mentions 'computationally constrained settings' in reference to SimNets, but not the hardware used for this paper's experiments. |
| Software Dependencies | No | Our implementation, available online at https://github.com/HUJI-Deep/inductive-pooling, is based on the SimNets branch (Cohen et al. (2016a)) of the Caffe toolbox (Jia et al. (2014)). The objective function was... optimized using Adam (Kingma and Ba (2014)). The paper mentions the Caffe toolbox and Adam but does not specify their version numbers or other software dependencies with versions. |
| Experiment Setup | Yes | In particular, our objective function was the cross-entropy loss with no L2 regularization (i.e. with weight decay set to 0), optimized using Adam (Kingma and Ba (2014)) with step-size α = 0.003 and moment decay rates β1 = β2 = 0.9. 15000 iterations with batch size 64 (48 epochs) were run, with the step-size α decreasing by a factor of 10 after 12000 iterations (38.4 epochs). |
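The training hyperparameters reported in the Experiment Setup row can be restated as a small configuration sketch. This is not the authors' code (their implementation is Caffe-based); it is a hedged plain-Python restatement of the reported values, including the step-size schedule that drops α by a factor of 10 after 12000 iterations. The names `ADAM_CONFIG` and `step_size` are illustrative, not taken from the repository.

```python
# Hedged sketch of the reported training setup (not the authors' code).
# All numeric values come from the paper's Experiment Setup description.

ADAM_CONFIG = {
    "alpha": 0.003,       # initial step size α
    "beta1": 0.9,         # first-moment decay rate β1
    "beta2": 0.9,         # second-moment decay rate β2 (0.9, not the common 0.999 default)
    "weight_decay": 0.0,  # no L2 regularization
}

BATCH_SIZE = 64
TOTAL_ITERS = 15000   # 15000 * 64 / 20000 train images = 48 epochs
DECAY_ITER = 12000    # 12000 * 64 / 20000 train images = 38.4 epochs

def step_size(iteration):
    """Step size α at a given iteration: divided by 10 after 12000 iterations."""
    base = ADAM_CONFIG["alpha"]
    return base / 10.0 if iteration >= DECAY_ITER else base
```

A training loop using this schedule would query `step_size(it)` each iteration and pass the result to the Adam update alongside β1 and β2.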