Gaussian Cardinality Restricted Boltzmann Machines
Authors: Cheng Wan, Xiaoming Jin, Guiguang Ding, Dou Shen
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two real-world data sets justify the effectiveness of the proposed method and its superiority over CaRBM in terms of classification accuracy. |
| Researcher Affiliation | Collaboration | School of Software, Tsinghua University, Beijing, China; Baidu Corporation, Beijing, China |
| Pseudocode | Yes | Algorithm 1: Learning Algorithm of GC-RBM on Pretraining Phase (a hedged sketch of such a pretraining step appears below the table) |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | The experiments were conducted on MNIST and CIFAR-10 (Krizhevsky and Hinton 2009); a loading sketch follows the table. |
| Dataset Splits | No | The paper specifies training and test set sizes for MNIST and CIFAR-10 (e.g., "60000 images for training and 10000 images for test" for MNIST), but does not explicitly mention a validation split or its size. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not mention any specific software libraries, frameworks, or their version numbers used in the implementation or experimentation. |
| Experiment Setup | Yes | The μ and σ in GC-RBM and naive GC-RBM were assigned to μ ∈ {10, 20, ..., 100} and σ² ∈ {9, 25, 100}. In order to compare the three models, the k in CaRBM was assigned the same values as the μ in the Gaussian-threshold models. The CaRBM, naive GC-RBM and GC-RBM were applied to train a three-layer (784-100-10) feed-forward neural network... (the parameter grid is enumerated in a sketch below the table) |
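
The paper's Algorithm 1 itself is not reproduced in this report. As a rough illustration only, the following NumPy sketch shows one contrastive-divergence (CD-1) pretraining step in which the number of active hidden units is capped by a threshold drawn from N(μ, σ²). The top-k masking in `cap_hidden` is a crude hypothetical stand-in for the paper's cardinality potential, and all names and hyperparameters here are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
N_VISIBLE, N_HIDDEN, LR = 784, 100, 0.05   # sizes mirror the 784-100-10 setup

W = rng.normal(0.0, 0.01, size=(N_VISIBLE, N_HIDDEN))
b_v = np.zeros(N_VISIBLE)
b_h = np.zeros(N_HIDDEN)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_cap(mu, sigma2):
    """Draw an integer activation cap from N(mu, sigma^2), clipped to [0, N_HIDDEN]."""
    return int(np.clip(round(rng.normal(mu, np.sqrt(sigma2))), 0, N_HIDDEN))

def cap_hidden(probs, cap):
    """Bernoulli-sample only the `cap` most probable hidden units
    (a crude stand-in for a proper cardinality potential)."""
    h = np.zeros_like(probs)
    top = np.argsort(probs)[::-1][:cap]
    h[top] = (rng.random(cap) < probs[top]).astype(float)
    return h

def cd1_step(v0, mu=50.0, sigma2=25.0):
    """One CD-1 update with a Gaussian-sampled cardinality cap on hidden activity."""
    global W, b_v, b_h
    cap = sample_cap(mu, sigma2)
    p_h0 = sigmoid(v0 @ W + b_h)           # positive-phase hidden probabilities
    h0 = cap_hidden(p_h0, cap)             # capped hidden sample
    v1 = sigmoid(h0 @ W.T + b_v)           # mean-field reconstruction
    p_h1 = sigmoid(v1 @ W + b_h)           # negative-phase hidden probabilities
    W += LR * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += LR * (v0 - v1)
    b_h += LR * (p_h0 - p_h1)

cd1_step(rng.random(N_VISIBLE))            # smoke test on a random "image"
```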
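Both datasets are public with standard splits (60000 training / 10000 test images each). If one wanted to fetch them today, a minimal loading sketch might look like the following; torchvision is a modern convenience assumed here, not something the 2015 paper mentions.

```python
from torchvision import datasets

# Standard splits: train=True gives the 60000/50000 training images,
# train=False the 10000 test images cited in the paper.
mnist_train = datasets.MNIST("data/", train=True, download=True)
mnist_test = datasets.MNIST("data/", train=False, download=True)
cifar_train = datasets.CIFAR10("data/", train=True, download=True)
cifar_test = datasets.CIFAR10("data/", train=False, download=True)
```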
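To make the quoted parameter grid concrete, this short sketch enumerates the (μ, σ²) settings from the setup description; as stated, the CaRBM k values track the μ values for comparability. The variable names are illustrative only.

```python
from itertools import product

mus = list(range(10, 101, 10))   # μ ∈ {10, 20, ..., 100}
sigma2s = [9, 25, 100]           # σ² ∈ {9, 25, 100}

gcrbm_grid = list(product(mus, sigma2s))   # 30 (μ, σ²) settings per GC-RBM variant
carbm_grid = mus                           # CaRBM's k matches the μ values

print(f"{len(gcrbm_grid)} GC-RBM configs, {len(carbm_grid)} CaRBM configs")
```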