Unsupervised Learning with Truncated Gaussian Graphical Models

Authors: Qinliang Su, Xuejun Liao, Chunyuan Li, Zhe Gan, Lawrence Carin

AAAI 2017

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experimental results are provided to validate the proposed models and demonstrate their superiority over competing models. |
| Researcher Affiliation | Academia | Department of Electrical & Computer Engineering, Duke University, Durham, NC 27708-0291. |
| Pseudocode | No | The paper describes its algorithms and formulations through mathematical equations and textual descriptions, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link or an explicit statement of code release) for the methodology's source code. |
| Open Datasets | Yes | "We report experimental results of the RTGGM models on various publicly available data sets, including binary, count and real-valued data, and compare them to competing models." The binarized versions of the MNIST and Caltech 101 Silhouettes data sets are considered (see the binarization sketch after this table). The two corpora are preprocessed as in (Hinton and Salakhutdinov 2009). An RTGGM with 50 hidden nodes is trained. |
| Dataset Splits | No | The paper specifies training and testing sets, but it does not explicitly describe a separate validation split or its purpose. |
| Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions RMSprop as the optimizer but does not name any software with version numbers needed for reproduction (e.g., Python, TensorFlow, or PyTorch versions). |
| Experiment Setup | Yes | "For all RTGGM models considered below, we use x(0) and x(25) to get a CD-based gradient estimate and then use RMSprop to update the model parameters, with the RMSprop decay set to 0.95." The learning rate is set to 10^-4 and the precision d_i is set to 5. (A sketch of this update appears after this table.) |