Group Reconstruction and Max-Pooling Residual Capsule Network
Authors: Xinpeng Ding, Nannan Wang, Xinbo Gao, Jie Li, Xiaoyu Wang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on the CIFAR-10/100 and SVHN datasets, and the results show that our method performs better than state-of-the-art methods. We perform experiments with the capsule network [Sabour et al., 2017], named CapsNet, as the baseline for comparison with the proposed G-CapsNet. |
| Researcher Affiliation | Collaboration | Xinpeng Ding1,2, Nannan Wang1,3, Xinbo Gao1,2, Jie Li1,2 and Xiaoyu Wang4. 1State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China; 2School of Electronic Engineering, Xidian University, Xi'an, China; 3School of Telecommunications Engineering, Xidian University, Xi'an, China; 4IntelliFusion, Shenzhen, China |
| Pseudocode | Yes | Algorithm 1 Group Reconstruction Routing Algorithm. Input: low-level capsules u_m. Output: high-level capsules v_t. 1: for all capsules u_m: ν_{t\|m} ← CTL_t(u_m); 2: divide all capsules u_m into groups G_n; 3: for all groups G_n do; 4: for all capsules ν_{t\|m} ∈ G_n: μ_{t\|m} ← f_μ(ν_{t\|m}), σ²_{t\|m} ← f_σ(ν_{t\|m}), v̂_t ← f_v(ν_{t\|m}); 5: for all capsules v̂_t: v_t ← N(v̂_t; μ_{t\|m}, σ²_{t\|m}); 6: for all capsules v_t: û_m ← g_û(v_t); 7: end for; 8: for all capsules û_m: L_Recon ← D(û_m, u_m)²; 9: f_μ, f_σ, f_v, g_û ← BP(L_Recon); 10: return v_t |
| Open Source Code | No | The paper does not provide any specific links or statements regarding the availability of open-source code for the described methodology. |
| Open Datasets | Yes | We do experiments on both CIFAR [Krizhevsky and Hinton, 2009] and SVHN [Netzer et al., 2011] datasets to evaluate the performance of our network. |
| Dataset Splits | No | The Street View House Numbers dataset has 73,257 colored digits for training and 26,032 digits for testing, with an additional 531,131 training images available. We adopt standard data augmentation (horizontal flipping and shifting) on the datasets. The paper does not explicitly state the validation-set splits or split methodology. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions optimizers and activation functions but does not specify any software libraries or dependencies with version numbers. |
| Experiment Setup | Yes | Initial learning rate is set to 0.0001 and maximum epoch is 80. Adam [Kingma and Ba, 2014] is used with momentum 0.9. Batch size [Ioffe and Szegedy, 2015] is 128. |
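The pseudocode row above can be made concrete with a minimal NumPy sketch of Algorithm 1: each low-level capsule casts a vote, capsules are divided into groups, each group produces Gaussian parameters and a sampled high-level capsule, and a reconstruction loss is computed for training the routing maps. All linear maps here (`W_vote`, `W_mu`, `W_sigma`, `W_v`, `W_rec`) are hypothetical stand-ins for the paper's learned functions CTL_t, f_μ, f_σ, f_v, and g_û; the shapes, group assignment, and mean-pooling over votes are assumptions, not the authors' exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_reconstruction_routing(u, n_groups=2, dim_out=8):
    """Sketch of Algorithm 1 (Group Reconstruction Routing).

    u: low-level capsules, shape (M, dim_in).
    Returns (v, recon_loss): high-level capsules of shape
    (n_groups, dim_out) and the reconstruction loss L_Recon
    that would be backpropagated to train the routing maps.
    """
    M, dim_in = u.shape
    # Step 1: votes nu_{t|m} from each low-level capsule (stand-in for CTL_t).
    W_vote = rng.normal(scale=0.1, size=(dim_in, dim_out))
    nu = u @ W_vote                                  # (M, dim_out)
    # Step 2: divide all capsules into groups G_n.
    groups = np.array_split(np.arange(M), n_groups)
    # Hypothetical stand-ins for f_mu, f_sigma, f_v, g_u.
    W_mu = rng.normal(scale=0.1, size=(dim_out, dim_out))
    W_sigma = rng.normal(scale=0.1, size=(dim_out, dim_out))
    W_v = rng.normal(scale=0.1, size=(dim_out, dim_out))
    W_rec = rng.normal(scale=0.1, size=(dim_out, dim_in))
    v_all, recon_loss = [], 0.0
    for idx in groups:
        nu_g = nu[idx]
        # Step 4: Gaussian parameters and candidate capsule from the votes.
        mu = (nu_g @ W_mu).mean(axis=0)
        log_var = (nu_g @ W_sigma).mean(axis=0)
        v_hat = (nu_g @ W_v).mean(axis=0)
        # Step 5: sample v_t ~ N(v_hat; mu, sigma^2) via reparameterization.
        eps = rng.normal(size=dim_out)
        v_t = v_hat + mu + np.exp(0.5 * log_var) * eps
        v_all.append(v_t)
        # Steps 6-8: reconstruct the group's low-level capsules and
        # accumulate the squared-distance reconstruction loss.
        u_hat = np.tile(v_t @ W_rec, (len(idx), 1))  # (len(idx), dim_in)
        recon_loss += float(np.sum((u_hat - u[idx]) ** 2))
    return np.stack(v_all), recon_loss

v, loss = group_reconstruction_routing(rng.normal(size=(6, 16)))
print(v.shape)  # (2, 8)
```

In the paper, step 9 updates f_μ, f_σ, f_v, and g_û by backpropagating L_Recon; the sketch only computes the forward pass and the loss value, since the weight matrices here are fixed random placeholders.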