SubSpace Capsule Network
Authors: Marzieh Edraki, Nazanin Rahnavard, Mubarak Shah (pp. 10745-10753)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The effectiveness of SCN is evaluated through a comprehensive set of experiments on supervised image classification, semi-supervised image classification, and high-resolution image generation using the generative adversarial network (GAN) framework. SCN significantly improves the performance of the baseline models on all three tasks. |
| Researcher Affiliation | Academia | Marzieh Edraki,1 Nazanin Rahnavard,1,2 Mubarak Shah1 1Center for Research in Computer Vision 2Department of Electrical and Computer Engineering University of Central Florida Orlando, Florida, USA, 32816 m.edraki@knights.ucf.edu, nazanin@eecs.ucf.edu, shah@crcv.ucf.edu |
| Pseudocode | No | The paper describes algorithms and mathematical formulations but does not present any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: http://github.com/MarziEd/SubSpace-Capsule-Network |
| Open Datasets | Yes | Datasets: We use CIFAR10 (Krizhevsky and Hinton 2009), Street View House Number (SVHN) (Netzer et al. 2011), ImageNet (Deng et al. 2009), CelebA (Liu et al. 2015), and three categories of the LSUN dataset, namely bedroom, cat, and horse, throughout our experiments. |
| Dataset Splits | Yes | We hold out a set of 5000 training samples as our validation set for subspace capsule dimension selection, and fine-tune the whole model on all training samples afterward. (A minimal split sketch appears after the table.) |
| Hardware Specification | No | The paper mentions 'parallel computation on GPUs' and acknowledges 'access to the CASS GPU cluster supported in parts by the US Army/DURIP program W911NF-17-1-0208', but it does not specify any particular GPU models, CPU types, or other detailed hardware specifications. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch, or other libraries). |
| Experiment Setup | Yes | The paper reports several training regimes: Adam with initial learning rate 0.0003, β1 = 0.5, and β2 = 0.99; SGD with momentum 0.9 for 100 epochs, with the learning rate initialized at 0.1 and decayed by a factor of 0.1 every 30 epochs; and Adam with initial learning rate 0.0002 for 25 epochs. (See the optimizer sketch after the table.) |
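
The held-out validation split described in the Dataset Splits row is straightforward to reproduce. The following PyTorch sketch is a minimal illustration under our own assumptions: CIFAR-10 as the dataset, a fixed seed, and `random_split` are our choices, not details specified by the paper.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# CIFAR-10 training set: 50,000 samples.
full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())

# Hold out 5,000 samples as a validation set for subspace-capsule
# dimension selection; the fixed seed is our assumption, not the paper's.
split_gen = torch.Generator().manual_seed(0)
train_set, val_set = random_split(full_train, [45_000, 5_000],
                                  generator=split_gen)

# Per the paper, the whole model is then fine-tuned on all 50,000
# training samples after hyperparameter selection.
```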
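The hyperparameters in the Experiment Setup row map directly onto standard optimizer constructs. The sketch below shows one plausible PyTorch reading of the three quoted regimes; the stand-in model and the task each regime belongs to are assumptions, since the excerpt does not attribute them.

```python
import torch
import torch.nn as nn

# Stand-in module; the actual SCN architecture is not reproduced here.
model = nn.Linear(10, 10)

# Regime 1 (quoted): Adam, lr = 0.0003, beta1 = 0.5, beta2 = 0.99.
adam_opt = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.5, 0.99))

# Regime 2 (quoted): SGD with momentum 0.9 for 100 epochs, lr initialized
# at 0.1 and decayed by a factor of 0.1 every 30 epochs.
sgd_opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sgd_sched = torch.optim.lr_scheduler.StepLR(sgd_opt, step_size=30, gamma=0.1)

# Regime 3 (quoted): Adam with lr = 0.0002 for 25 epochs.
adam_opt_2 = torch.optim.Adam(model.parameters(), lr=2e-4)
```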