Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Rebuilding Factorized Information Criterion: Asymptotically Accurate Marginal Likelihood

Authors: Kohei Hayashi, Shin-ichi Maeda, Ryohei Fujimaki

ICML 2015 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "A demonstrative study on Bayesian principal component analysis is provided and numerical experiments support our theoretical results." |
| Researcher Affiliation | Collaboration | Kohei Hayashi (EMAIL): Global Research Center for Big Data Mathematics, National Institute of Informatics; Kawarabayashi Large Graph Project, ERATO, JST. Shin-ichi Maeda (EMAIL): Graduate School of Informatics, Kyoto University. Ryohei Fujimaki (EMAIL): NEC Knowledge Discovery Research Laboratories. |
| Pseudocode | Yes | Algorithm 1: the gFAB algorithm |
| Open Source Code | No | The paper does not provide any statements about the availability of open-source code or links to repositories. |
| Open Datasets | No | The paper uses synthetic data: "We used the synthetic data X = ZW + E, where W ∼ Uniform([0, 1]), Z ∼ N(0, I), and E_nd ∼ N(0, σ²). Under the data dimensionality D = 30 and the true model K = 10, we generated data with N = 100, 500, 1000, and 2000." No concrete access information for a publicly available dataset is provided. |
| Dataset Splits | No | The paper describes generating synthetic data for several sample sizes N but does not specify training, validation, or test splits, nor any cross-validation setup. |
| Hardware Specification | No | The paper does not report hardware details such as CPU/GPU models, processors, or memory used for the experiments. |
| Software Dependencies | No | The paper does not list any software dependencies with version numbers. |
| Experiment Setup | Yes | "We stopped the algorithms if the relative error was less than 10⁻⁵ or the number of iterations was greater than 10⁴." |
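The synthetic-data recipe quoted in the Open Datasets row and the stopping rule quoted in the Experiment Setup row can be sketched together as follows. This is a minimal illustration, not the paper's gFAB algorithm: the factor shapes (Z: N×K, W: K×D), the noise scale σ = 1, and the power-iteration objective used to demonstrate the stopping rule are all assumptions for illustration.

```python
import numpy as np

def make_synthetic(N, D=30, K=10, sigma=1.0, seed=0):
    """Synthetic data X = Z W + E as quoted in the Open Datasets row.

    The factor shapes (Z: N x K, W: K x D) and the noise scale sigma
    are assumptions taken from context, not stated in full by the row.
    """
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((N, K))          # Z ~ N(0, I)
    W = rng.uniform(0.0, 1.0, size=(K, D))   # W ~ Uniform([0, 1])
    E = sigma * rng.standard_normal((N, D))  # E_nd ~ N(0, sigma^2)
    return Z @ W + E

def leading_singular_value(X, tol=1e-5, max_iter=10**4):
    """Power iteration illustrating the quoted stopping rule: stop when
    the relative error falls below 1e-5 or after 1e4 iterations.

    The objective (top singular value of X) is a stand-in chosen for
    illustration; it is not the gFAB objective from the paper.
    """
    rng = np.random.default_rng(1)
    v = rng.standard_normal(X.shape[1])
    v /= np.linalg.norm(v)
    s_prev = 0.0
    for _ in range(max_iter):
        w = X.T @ (X @ v)                    # one power-iteration step on X^T X
        s = np.sqrt(np.linalg.norm(w))       # current top-singular-value estimate
        v = w / np.linalg.norm(w)
        if s_prev > 0 and abs(s - s_prev) / s_prev < tol:
            break                            # relative error below tol
        s_prev = s
    return s

X = make_synthetic(N=100)                    # D = 30, K = 10 as in the paper
```

The same `make_synthetic` call with N = 500, 1000, and 2000 reproduces the other sample sizes mentioned in the Open Datasets row.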