Ladder Capsule Network
Authors: Taewon Jeong, Youngmin Lee, Heeyoung Kim
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments on MNIST demonstrate that the ladder capsule network learns an equivariant representation and improves the capability to extrapolate or generalize to pose variations. |
| Researcher Affiliation | Academia | Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea. Correspondence to: Heeyoung Kim <heeyoungkim@kaist.ac.kr>. |
| Pseudocode | Yes | Algorithm 1: Dynamic routing algorithm (Sabour et al., 2017). A sketch of this routing procedure appears after the table. |
| Open Source Code | Yes | The code for L-CapsNet is available at https://github.com/taewonjeong/L-CapsNet. |
| Open Datasets | Yes | The experiments on MNIST demonstrate that the ladder capsule network learns an equivariant representation and improves the capability to extrapolate or generalize to pose variations. |
| Dataset Splits | No | The paper specifies training and testing set sizes (60,000 training and 10,000 test images for MNIST), but it does not mention a separate validation set or give explicit train/validation/test split percentages. |
| Hardware Specification | No | The paper does not explicitly state the specific hardware (e.g., GPU/CPU models, memory) used to run the experiments. It only mentions 'average computation time' without hardware context. |
| Software Dependencies | No | The paper mentions software components like the 'Adam optimizer' and the 'ReLU activation function' but does not specify any software libraries or dependencies with their version numbers. |
| Experiment Setup | Yes | We trained the L-CapsNet with the margin loss with m⁺ = 0.9, m⁻ = 0.1, and λ = 0.5. In addition, we found that adding the loss of the difference between the code vector and the lower-level activity level, ‖c_l − A_l‖₂, would be helpful for training; thus we trained the L-CapsNet with the loss L = L_margin + ε‖c_l − A_l‖₂ with ε = 0.0001. We used the Adam optimizer with an exponentially decaying learning rate starting from 0.001. (A loss sketch follows the table.) |
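The pseudocode the report points to is the dynamic routing algorithm of Sabour et al. (2017), which the paper reproduces as its Algorithm 1. Below is a minimal NumPy sketch of that procedure for orientation; the array shapes, variable names, and the `num_iters=3` default are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash nonlinearity: short vectors shrink toward 0, long ones toward unit length."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Dynamic routing between capsule layers (Sabour et al., 2017).

    u_hat: prediction vectors of shape (num_in, num_out, dim_out), where
           u_hat[i, j] is input capsule i's prediction for output capsule j.
    Returns the output capsule vectors v of shape (num_out, dim_out).
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))  # routing logits, initialized to zero
    for _ in range(num_iters):
        b_shift = b - b.max(axis=1, keepdims=True)             # numerically stable softmax
        c = np.exp(b_shift) / np.exp(b_shift).sum(axis=1, keepdims=True)
        s = np.einsum('ij,ijd->jd', c, u_hat)                  # weighted sum of predictions
        v = squash(s)                                          # output capsule vectors
        b = b + np.einsum('ijd,jd->ij', u_hat, v)              # update logits by agreement
    return v
```

For example, `dynamic_routing(np.random.randn(1152, 10, 16))` mirrors the 1152-to-10 capsule routing used for MNIST in Sabour et al. (2017).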
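The Experiment Setup row quotes a margin loss with m⁺ = 0.9, m⁻ = 0.1, λ = 0.5 plus an auxiliary penalty ε‖c_l − A_l‖₂ with ε = 0.0001. The sketch below shows one way to assemble that loss; the one-hot target encoding and the treatment of c_l and A_l as same-shaped arrays are assumptions, since the exact tensor layout is not given in the quoted text.

```python
import numpy as np

def margin_loss(v, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Margin loss of Sabour et al. (2017) with the hyperparameters quoted above.

    v:       output capsule vectors, shape (batch, num_classes, dim)
    targets: one-hot labels, shape (batch, num_classes)
    """
    lengths = np.linalg.norm(v, axis=-1)                            # ||v_k|| per class
    pos = targets * np.maximum(0.0, m_pos - lengths) ** 2           # present-class term
    neg = lam * (1.0 - targets) * np.maximum(0.0, lengths - m_neg) ** 2
    return (pos + neg).sum(axis=1).mean()

def total_loss(v, targets, code_vec, activity, eps=1e-4):
    """L = L_margin + eps * ||c_l - A_l||_2; code_vec/activity shapes are hypothetical."""
    return margin_loss(v, targets) + eps * np.linalg.norm(code_vec - activity)
```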