Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Using Inherent Structures to design Lean 2-layer RBMs

Authors: Abhishek Bansal, Abhinav Anand, Chiranjib Bhattacharyya

ICML 2018 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We conduct extensive experiments on synthetic datasets to verify our claim. Our main goals are to experimentally verify Theorems 1, 2 and Corollaries 3 and 4. All experiments were run on CPU with 2 Xeon Quad-Core processors (2.60GHz 12MB L2 Cache) and 16GB memory running Ubuntu 16.02. To validate the claim made in Corollary 3 we considered training a DBM with two hidden layers on the MNIST dataset.
Researcher Affiliation Collaboration IBM Research; Dept. of CSA, IISc, Bengaluru, India.
Pseudocode No The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code Yes The source code and instructions to run are available at http://mllab.csa.iisc.ernet.in/publications.
Open Datasets Yes To validate the claim made in Corollary 3 we considered training a DBM with two hidden layers on the MNIST dataset. We initialized weights and biases of each RBM architecture randomly and then performed Gibbs sampling for 5000 steps to generate a synthetic dataset of 60,000 points.
Dataset Splits No The paper mentions using 'test data' for evaluation but does not specify details for a separate validation split or cross-validation methodology.
Hardware Specification Yes All experiments were run on CPU with 2 Xeon Quad-Core processors (2.60GHz 12MB L2 Cache) and 16GB memory running Ubuntu 16.02.
Software Dependencies No The paper mentions 'Ubuntu 16.02' but does not specify any particular software libraries, frameworks, or their version numbers used for the experiments.
Experiment Setup Yes We initialized weights and biases of each RBM architecture randomly and then performed Gibbs sampling for 5000 steps to generate a synthetic dataset of 60,000 points. To estimate the model's partition function we used 20,000 βk spaced uniformly from 0 to 1.0.
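The quoted setup combines two standard pieces of RBM machinery: block Gibbs sampling to generate synthetic data (5000 steps per point) and a partition-function estimate over 20,000 uniformly spaced inverse temperatures βk, which matches the usual annealed importance sampling (AIS) recipe. The sketch below is illustrative only: the layer sizes, random parameters, number of AIS runs, and the uniform base distribution are assumptions, not details taken from the paper, which uses larger architectures and its own released code.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Small random binary RBM (illustrative sizes, not the paper's).
nv, nh = 20, 10
W = rng.normal(scale=0.1, size=(nv, nh))
b = rng.normal(scale=0.1, size=nv)   # visible biases
c = rng.normal(scale=0.1, size=nh)   # hidden biases

def gibbs_step(v, beta=1.0):
    """One block-Gibbs sweep (h given v, then v given h) at inverse temperature beta."""
    h = (rng.random(nh) < sigmoid(beta * (v @ W + c))).astype(float)
    v = (rng.random(nv) < sigmoid(beta * (W @ h + b))).astype(float)
    return v, h

def sample_dataset(n_points, n_steps=5000):
    """Generate a synthetic dataset by running n_steps Gibbs sweeps per point."""
    data = np.empty((n_points, nv))
    v = (rng.random(nv) < 0.5).astype(float)
    for i in range(n_points):
        for _ in range(n_steps):
            v, _ = gibbs_step(v)
        data[i] = v
    return data

def neg_energy(v, h):
    """-E(v, h) for the binary RBM energy E = -v'Wh - b'v - c'h."""
    return v @ W @ h + b @ v + c @ h

def ais_log_z(n_betas=20000, n_runs=10):
    """AIS estimate of log Z along a geometric path from the uniform distribution,
    with n_betas inverse temperatures spaced uniformly on [0, 1]."""
    betas = np.linspace(0.0, 1.0, n_betas)
    log_w = np.zeros(n_runs)
    for r in range(n_runs):
        # Exact sample from the beta=0 (uniform) base distribution.
        v = (rng.random(nv) < 0.5).astype(float)
        h = (rng.random(nh) < 0.5).astype(float)
        for k in range(1, n_betas):
            # Importance-weight increment, then transition at the new temperature.
            log_w[r] += (betas[k] - betas[k - 1]) * neg_energy(v, h)
            v, h = gibbs_step(v, beta=betas[k])
    log_z0 = (nv + nh) * np.log(2.0)  # partition function of the uniform base
    m = log_w.max()
    return log_z0 + m + np.log(np.mean(np.exp(log_w - m)))
```

With `n_betas=20000` the per-step weight increments stay small, which is what keeps the AIS estimator's variance manageable; the uniform spacing of βk mirrors the quoted setup.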