Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Statistical Inference with Unnormalized Discrete Models and Localized Homogeneous Divergences

Authors: Takashi Takenouchi, Takafumi Kanamori

JMLR 2017 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments showed that the proposed estimator performs comparably to the maximum likelihood estimator but with drastically lower computational cost." Keywords: unnormalized model, homogeneous divergence, empirical localization, discrete model
Researcher Affiliation | Academia | Takashi Takenouchi, Future University Hakodate and RIKEN Center for Advanced Intelligence Project, 116-2 Kamedanakano, Hakodate, Hokkaido 040-8655, Japan; Takafumi Kanamori, Department of Computing and Software Systems, Nagoya University, and RIKEN Center for Advanced Intelligence Project, Furocho, Chikusaku, Nagoya 464-8603, Japan
Pseudocode | No | The paper describes methods and theoretical properties but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to a code repository or supplementary materials containing code.
Open Datasets | Yes | "In numerical experiments, training samples were generated from the Poisson distribution having the probability function p_θ(x) = e^{xθ}/(x! e^{e^θ}) for x = 0, 1, 2, …, where θ is the natural parameter. … The dimension d of the input was set to 10 and the synthetic dataset was randomly generated from the second-order Boltzmann machine (Example 3) with a parameter θ ∼ N(0, I/d)."
Dataset Splits | Yes | "For each dimension d = 2, 3, …, 21, we generated a dataset containing n = 50 · 2^k (k = 1, …, 9) examples from the fully visible Boltzmann machine… Figure 8(b) shows the median of averaged log-likelihoods of each method for a test dataset consisting of 10,000 examples, over 50 trials."
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as CPU or GPU models, or memory specifications.
Software Dependencies | No | "All methods were optimized with the optim function in R language (R Core Team, 2015)." While the R language is mentioned, no specific version number is provided for R itself or for the optim function.
Experiment Setup | Yes | "In numerical experiments, the parameters of S_{α,α′} were set to α = 1.1, α′ = 0.1. … we compared the proposed estimator with parameter settings (α, α′) = (1.01, 0.01), (1.1, 0.1), (2, 1)… To overcome the degradation in performance of the proposed estimator caused by a lack of example patterns, we consider a regularized version of the proposed estimator, S_{α,α′}(p̂, q_θ) + (λ/2n)‖θ‖². … and λ = 0, 10^{-2}, 10^{-4}. … An initial value of the parameter was set by N(0, I) and commonly used by all methods."
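The Poisson model quoted in the Open Datasets row is parameterized by its natural parameter θ, which corresponds to the usual rate λ = e^θ. A minimal sketch of generating such training samples (illustrative only, not the authors' code; the helper name and all constants are assumptions):

```python
import numpy as np

# Illustrative sketch, not the authors' code: the quoted probability
# function is a Poisson distribution written in terms of its natural
# parameter theta, so the usual rate is lambda = exp(theta).
rng = np.random.default_rng(0)

def sample_poisson_natural(theta, n):
    """Draw n samples from a Poisson with natural parameter theta."""
    return rng.poisson(lam=np.exp(theta), size=n)

x = sample_poisson_natural(theta=0.5, n=1000)
print(x.mean())  # should be near exp(0.5), about 1.65
```

The empirical mean estimates e^θ, so log of the sample mean recovers the natural parameter.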
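The regularized objective quoted in the Experiment Setup row adds a ridge penalty (λ/2n)‖θ‖² to the divergence before optimization. The divergence S_{α,α′} itself is not reproduced here; as a hedged sketch, a Poisson negative log-likelihood stands in for the loss, purely to illustrate the penalty term and a numerical-optimizer call analogous to the quoted use of R's optim:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: the paper's divergence S_{alpha,alpha'} is NOT implemented
# here; a Poisson negative log-likelihood is a stand-in loss, used only to
# show the quoted penalty lambda/(2n) * ||theta||^2 and an optimizer call
# analogous to R's optim.
rng = np.random.default_rng(1)
x = rng.poisson(lam=np.exp(0.5), size=200)
n, lam = len(x), 1e-2  # lam matches one of the quoted lambda values

def objective(theta_vec):
    theta = theta_vec[0]
    # Poisson negative log-likelihood up to an additive constant,
    # plus the L2 (ridge) penalty from the quoted objective.
    nll = -(np.sum(x) * theta - n * np.exp(theta))
    return nll + lam / (2 * n) * theta ** 2

res = minimize(objective, x0=np.zeros(1))
print(res.x[0])  # close to log(mean(x)), since the penalty is tiny here
```

With λ this small relative to n, the penalty barely shifts the optimum away from the unregularized solution; in the paper it matters when example patterns are scarce.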