Echo-State Conditional Restricted Boltzmann Machines

Authors: Sotirios Chatzis

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We apply our methods to sequential data modeling and classification experiments using public datasets. As we experimentally demonstrate, our methods outperform alternative RBM-based approaches, as well as other state-of-the-art methods, such as CRFs, in both data modeling and classification applications from diverse domains."
Researcher Affiliation | Academia | "Sotirios P. Chatzis, Department of Electrical Engineering, Computer Engineering, and Informatics, Cyprus University of Technology, Limassol 3603, Cyprus (soteri0s@mac.com)"
Pseudocode | No | The paper describes algorithmic steps in text and bullet points but does not include structured pseudocode or formally labeled algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology, such as a repository link, an explicit code-release statement, or code in supplementary materials.
Open Datasets | Yes | "Our experiments are based on the dataset described in (Ni, Wang, and Moulin 2011)."
Dataset Splits | Yes | "We use cross-validation in the following fashion: in each cycle, we use 15 randomly selected video sequences to perform training, and keep the remaining 20 for testing." "We have recorded 4 demonstrations and used 3 for training and 1 for testing purposes." "...means and standard deviations obtained by application of leave-one-out cross-validation." (See the split sketch after the table.)
Hardware Specification | No | The paper mentions devices used for data collection (a Kinect™ sensor) and demonstration (a NAO robot) but does not give hardware details (GPU/CPU models, processor types, or memory) for the computational experiments.
Software Dependencies | No | The paper mentions various algorithms and methods (e.g., CD-k, LM, iSVM, CRBM) but does not name specific software dependencies with version numbers.
Experiment Setup | Yes | "In all our experiments, the CD-k algorithm is performed with k = 10; all parameters use a gradient-ascent learning rate equal to 10^-3, except for the autoregressive weights of the imCRBM method, where the learning rate is equal to 10^-5. A momentum term is also used: 0.9 of the previously accumulated gradient is added to the current gradient. We use hyperbolic-tangent reservoir neurons, h(·) = tanh(·); the reservoir spectral radius is set equal to 0.95." (See the setup sketch after the table.)
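
For concreteness, the random train/test protocol quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration assuming 35 video sequences in total (15 + 20, per the quoted numbers); the number of cycles, the seed, and the function name are hypothetical choices, not stated in the paper.

```python
import random

# Minimal sketch of the quoted cross-validation protocol: in each cycle,
# 15 of the 35 video sequences are randomly drawn for training and the
# remaining 20 are held out for testing. The cycle count is an assumption.
def random_splits(sequence_ids, n_train=15, n_cycles=10, seed=0):
    rng = random.Random(seed)
    for _ in range(n_cycles):
        train = rng.sample(sequence_ids, n_train)
        test = [s for s in sequence_ids if s not in train]
        yield train, test

for train, test in random_splits(list(range(35))):
    assert len(train) == 15 and len(test) == 20
```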
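
Similarly, the quoted Experiment Setup settings translate into the sketch below: rescaling a random reservoir to spectral radius 0.95 with tanh neurons, a generic binary-RBM CD-k gradient with k = 10, and a momentum gradient-ascent step with the quoted rates. The reservoir size and the binary-RBM energy function are illustrative assumptions, not the paper's exact echo-state CRBM.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Reservoir initialization: random recurrent weights rescaled so the
# spectral radius (largest absolute eigenvalue) equals 0.95, with
# hyperbolic-tangent neurons, h(.) = tanh(.), as quoted above.
# The reservoir size of 200 is an assumption for illustration.
n_res = 200
W_res = rng.standard_normal((n_res, n_res))
W_res *= 0.95 / np.max(np.abs(np.linalg.eigvals(W_res)))
h = np.tanh

# Generic binary-RBM CD-k weight gradient with k = 10 Gibbs steps.
# This is a textbook sketch, not the paper's echo-state CRBM energies.
def cd_k(v0, W, b_v, b_h, k=10):
    v = v0
    ph0 = sigmoid(v0 @ W + b_h)          # positive-phase hidden probs
    for _ in range(k):
        h_samp = (rng.random(ph0.shape) < sigmoid(v @ W + b_h)).astype(float)
        v = (rng.random(v0.shape) < sigmoid(h_samp @ W.T + b_v)).astype(float)
    phk = sigmoid(v @ W + b_h)           # negative-phase hidden probs
    return v0[:, None] * ph0[None, :] - v[:, None] * phk[None, :]

# Gradient ascent with momentum, matching the quoted settings: learning
# rate 1e-3 (1e-5 for the imCRBM autoregressive weights), with 0.9 of the
# previously accumulated gradient added to the current gradient.
def momentum_step(param, grad, velocity, lr=1e-3, momentum=0.9):
    velocity = momentum * velocity + grad
    return param + lr * velocity, velocity
```

Keeping the spectral radius just below 1 is the standard way of enforcing the echo-state property of the reservoir, which is presumably why 0.95 was chosen.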