Empirical Localization of Homogeneous Divergences on Discrete Sample Spaces
Authors: Takashi Takenouchi, Takafumi Kanamori
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 6, Experiments: We especially focus on a setting of β = 1, i.e., convexity of the risk function with the unnormalized model exp(θᵀφ(x)) holds (Theorem 1) and examined performance of the proposed estimator. Section 6.1, Fully visible Boltzmann machine: In the first experiment, we compared the proposed estimator with parameter settings (α, α′) = (1.01, 0.01), (1.01, 0.01), (2, 1), with the MLE and the ratio matching method [8]. ... Figure 1 (a) shows the median of the root mean square errors (RMSEs) between θ and θ̂ of each method over 50 trials, against the number n of examples. (See the simulation sketch following the table.) |
| Researcher Affiliation | Academia | Takashi Takenouchi Department of Complex and Intelligent Systems Future University Hakodate 116-2 Kamedanakano, Hakodate, Hokkaido, 040-8655, Japan ttakashi@fun.ac.jp Takafumi Kanamori Department of Computer Science and Mathematical Informatics Nagoya University Furocho, Chikusaku, Nagoya 464-8601, Japan kanamori@is.nagoya-u.ac.jp |
| Pseudocode | No | The paper describes methods and derivations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any statements about releasing source code or provide links to a code repository for the methodology described. |
| Open Datasets | No | The paper states: 'the synthetic dataset was randomly generated from the second order Boltzmann machine (Example 2) with a parameter θ ∼ N(0, I)', but does not provide any access information (link, DOI, repository, or formal citation) for a publicly available dataset. |
| Dataset Splits | No | The paper does not specify exact split percentages, absolute sample counts for each split, or reference predefined splits with citations for training, validation, or test datasets. |
| Hardware Specification | No | The paper mentions 'All methods were optimized with the optim function in R language [16]' but provides no specific details regarding the hardware used for these optimizations or experiments, such as CPU/GPU models, memory, or specific computing environments. |
| Software Dependencies | Yes | All methods were optimized with the optim function in R language [16]. |
| Experiment Setup | Yes | In the first experiment, we compared the proposed estimator with parameter settings (α, α′) = (1.01, 0.01), (1.01, 0.01), (2, 1), with the MLE and the ratio matching method [8]. ... An initial value of the parameter was set by N(0, I) and commonly used by all methods. (See the optimization sketch following the table.) |
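
The first experiment (Section 6.1) estimates the parameter of a fully visible second-order Boltzmann machine from synthetic data generated with θ ∼ N(0, I) and reports the RMSE between θ and θ̂ over 50 trials. The sketch below illustrates that setup under stated assumptions: it is not the authors' code (which used R), the dimension `d`, sample size `n`, and all function names are hypothetical, and sampling is done by brute-force enumeration of the 2^d states, which is only feasible for small `d`.

```python
import itertools

import numpy as np

# Hypothetical sketch of the synthetic-data setup described in Section 6.1:
# a fully visible second-order Boltzmann machine on x in {-1, +1}^d with
# sufficient statistics phi(x) = (x_i, x_i * x_j for i < j). The dimension d,
# sample size n, and all names are illustrative, not taken from the paper.

rng = np.random.default_rng(0)

def phi(x):
    """Second-order sufficient statistics: singleton and pairwise terms."""
    d = len(x)
    pairs = [x[i] * x[j] for i in range(d) for j in range(i + 1, d)]
    return np.concatenate([x, pairs])

def sample_boltzmann(theta, d, n, rng):
    """Draw n exact samples by enumerating all 2^d states (small d only)."""
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=d)))
    logits = np.array([theta @ phi(x) for x in states])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = rng.choice(len(states), size=n, p=probs)
    return states[idx]

def rmse(theta_hat, theta_true):
    """Root mean square error between estimated and true parameters."""
    return np.sqrt(np.mean((theta_hat - theta_true) ** 2))

d = 5                                      # illustrative; not from the paper
dim = d + d * (d - 1) // 2                 # number of sufficient statistics
theta_true = rng.standard_normal(dim)      # theta ~ N(0, I), as in the paper
X = sample_boltzmann(theta_true, d, n=1000, rng=rng)
```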
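
For the optimization step the paper only states that all methods were optimized with the `optim` function in R [16]. As a hedged analog, the snippet below continues the previous sketch (reusing `phi`, `X`, `d`, `dim`, `rng`, `rmse`, and `theta_true` defined there) and fits the MLE baseline with `scipy.optimize.minimize`, initialized at a N(0, I) draw as described; the proposed localized-divergence estimator would plug its own objective in place of the exact negative log-likelihood, which is not reproduced here.

```python
from scipy.optimize import minimize

# Hedged analog of the optimization step: the paper reports using R's `optim`;
# scipy.optimize.minimize stands in here. Only the MLE baseline is shown
# (the exact log-likelihood is tractable because the 2^d states can be
# enumerated); the proposed estimator's objective would replace neg_log_lik.
# Reuses phi, X, d, dim, rng, rmse, theta_true from the previous sketch.

states = np.array(list(itertools.product([-1.0, 1.0], repeat=d)))
Phi_states = np.array([phi(x) for x in states])   # shape (2^d, dim)
Phi_data = np.array([phi(x) for x in X])          # shape (n, dim)

def neg_log_lik(theta):
    """Exact negative log-likelihood of the fully visible Boltzmann machine."""
    log_z = np.log(np.sum(np.exp(Phi_states @ theta)))
    return -(Phi_data @ theta).mean() + log_z

theta0 = rng.standard_normal(dim)          # initial value drawn from N(0, I)
res = minimize(neg_log_lik, theta0, method="BFGS")
print("MLE RMSE:", rmse(res.x, theta_true))
```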