Bayesian Models of Data Streams with Hierarchical Power Priors
Authors: Andrés Masegosa, Thomas D. Nielsen, Helge Langseth, Darío Ramos-López, Antonio Salmerón, Anders L. Madsen
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The appropriateness of the approach is investigated through experiments using both synthetic and real-life data, giving encouraging results. |
| Researcher Affiliation | Collaboration | (1) Department of Mathematics, University of Almería, Almería, Spain; (2) Department of Computer and Information Science, Norwegian University of Science and Technology, Trondheim, Norway; (3) Department of Computer Science, Aalborg University, Aalborg, Denmark; (4) Hugin Expert A/S, Aalborg, Denmark. |
| Pseudocode | No | The paper describes methodological steps but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The proposed methods are released as part of an open-source toolbox for scalable probabilistic machine learning (http://www.amidsttoolbox.com) (Masegosa et al., 2017; 2016b; Cabañas et al., 2016). |
| Open Datasets | Yes | Electricity Market (Harries, 1999): The data set describes the electricity market of two Australian states. |
| Dataset Splits | Yes | Specifically, each data batch is randomly split into a train data set, x_t^train, and a test data set, x_t^test, containing two thirds and one third of the data batch, respectively. (A sketch of this per-batch split appears after the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments. |
| Software Dependencies | No | The paper mentions software like the VMP algorithm and the AMIDST toolbox but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The underlying variational engine is the VMP algorithm (Winn & Bishop, 2005) for all models; VMP was terminated after 100 iterations or if the relative increase in the lower bound fell below 0.01%. All priors were uninformative, using either flat Gaussians, flat Gamma priors, or uniform Dirichlet priors. We set γ = 0.1 for the HPP priors. Variational parameters were randomly initialized using the same seed for all methods. (A sketch of this stopping criterion appears after the table.) |
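To make the Dataset Splits row concrete, here is a minimal Python sketch of the per-batch two-thirds/one-third random split. The function and variable names are hypothetical illustrations, not code from the paper or the AMIDST toolbox:

```python
import numpy as np

def split_batch(batch, train_frac=2/3, seed=None):
    """Randomly split one data batch into a train part and a test part."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(batch))             # shuffled row indices
    n_train = int(round(train_frac * len(batch)))
    return batch[idx[:n_train]], batch[idx[n_train:]]

# Example: split every batch of a simulated stream independently.
stream = [np.random.randn(300, 5) for _ in range(10)]
for t, batch in enumerate(stream):
    x_train, x_test = split_batch(batch, seed=t)  # 200 train rows, 100 test rows
```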
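The Experiment Setup row quotes a VMP stopping rule: at most 100 iterations, or a relative increase in the variational lower bound below 0.01%. The sketch below shows only that stopping criterion; `vmp_step` is a hypothetical stand-in for one update pass, not the actual VMP implementation of Winn & Bishop (2005):

```python
def run_vmp(vmp_step, params, max_iter=100, rel_tol=1e-4):
    """Iterate a VMP-style update until the relative increase in the
    variational lower bound falls below rel_tol (0.01%) or max_iter is hit.

    vmp_step(params) is assumed to return (updated_params, lower_bound).
    """
    params, lower_bound = vmp_step(params)        # first iteration
    for _ in range(max_iter - 1):
        params, new_bound = vmp_step(params)
        # Relative increase of the lower bound between consecutive iterations.
        if abs(new_bound - lower_bound) < rel_tol * abs(lower_bound):
            return params, new_bound
        lower_bound = new_bound
    return params, lower_bound
```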