Learning Structured Densities via Infinite Dimensional Exponential Families

Authors: Siqi Sun, Mladen Kolar, Jinbo Xu

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Simulation studies illustrate the ability of our procedure to recover the true graph structure without knowledge of the data generating process."; "Finally, the results for simulated data are presented to demonstrate the correctness of our algorithm empirically."; and, from Section 5 (Experiments), "We illustrate the performance of our method on two simulations."
Researcher Affiliation | Academia | Siqi Sun (TTI Chicago, siqi.sun@ttic.edu); Mladen Kolar (University of Chicago, mkolar@chicagobooth.edu); Jinbo Xu (TTI Chicago, jinbo.xu@gmail.com)
Pseudocode | No | The paper describes the algorithm in prose but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Please visit ttic.uchicago.edu/~siqi for supplementary material and code."
Open Datasets | No | "We use the same sampling method as in [31] to generate the data: we set Ω_st = 0.4 for (s, t) ∈ S and its diagonal to a constant such that Ω is positive definite. We set the dimension d to 25 and change the sample size n ∈ {20, 40, 60, 80, 100} data points." The paper describes how the data were generated rather than providing access to a pre-existing public dataset. (A sketch of this sampling scheme appears after the table.)
Dataset Splits | No | The paper does not provide explicit training/validation/test splits, percentages, or sample counts for each split.
Hardware Specification | No | "This work was completed in part with resources provided by the University of Chicago Research Computing Center." This acknowledgment is too general to identify specific hardware.
Software Dependencies | No | The paper mentions 'existing group lasso solvers' and the huge package for high-dimensional undirected graph estimation in R, but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | "In our experiments, we use the same kernel defined as follows: k(x, y) = exp(-||x - y||_2^2 / (2σ^2)) + r(x^T y + c)^2, that is, the summation of a Gaussian kernel and a polynomial kernel. We set σ^2 = 1.5, r = 0.1 and c = 0.5 for all the simulations." and "We set the dimension d to 25 and change the sample size n ∈ {20, 40, 60, 80, 100} data points." (A sketch of this kernel appears after the table.)
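The data generation quoted in the Open Datasets row (precision matrix Ω with off-diagonal entries of 0.4 on the edge set S, a constant diagonal chosen so Ω is positive definite, d = 25) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the chain-graph edge set and the particular diagonal constant are assumptions, since the quoted text fixes neither.

    import numpy as np

    # Illustrative sketch of the quoted sampling scheme (assumptions noted below).
    d, n = 25, 100
    S = [(s, s + 1) for s in range(d - 1)]   # edge set S: a chain graph (assumption)

    omega = np.zeros((d, d))
    for s, t in S:
        omega[s, t] = omega[t, s] = 0.4      # off-diagonal entries on the edge set

    # Constant diagonal large enough to make Omega positive definite
    # (the exact constant is an assumption; the paper only requires positive definiteness).
    omega += (np.abs(np.linalg.eigvalsh(omega)).max() + 0.1) * np.eye(d)

    sigma = np.linalg.inv(omega)             # covariance = inverse of the precision matrix
    X = np.random.multivariate_normal(np.zeros(d), sigma, size=n)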
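The kernel quoted in the Experiment Setup row is the sum of a Gaussian kernel and a degree-2 polynomial kernel with σ^2 = 1.5, r = 0.1, and c = 0.5. The sketch below is an illustrative NumPy implementation of that formula; the function name and interface are chosen here and are not taken from the authors' code.

    import numpy as np

    def kernel(x, y, sigma2=1.5, r=0.1, c=0.5):
        """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) + r * (x^T y + c)^2."""
        diff = x - y
        gaussian = np.exp(-np.dot(diff, diff) / (2.0 * sigma2))
        polynomial = r * (np.dot(x, y) + c) ** 2
        return gaussian + polynomial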