Variational Bayesian Decision-making for Continuous Utilities
Authors: Tomasz Kuśmierczyk, Joseph Sakaya, Arto Klami
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide practical strategies for approximating and maximizing the gain, and empirically demonstrate consistent improvement when calibrating approximations for specific utilities. ... We demonstrate the technique in predictive machine learning tasks on the eight schools model [9, 29] and probabilistic matrix factorization on media consumption data. |
| Researcher Affiliation | Academia | Tomasz Kuśmierczyk, Joseph Sakaya, Arto Klami; Helsinki Institute for Information Technology HIIT, Department of Computer Science, University of Helsinki; {tomasz.kusmierczyk,joseph.sakaya,arto.klami}@helsinki.fi |
| Pseudocode | No | The paper describes algorithms but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for reproducing all experiments (with additional figures) is available online: https://github.com/tkusmierczyk/lcvi |
| Open Datasets | Yes | We demonstrate the technique in predictive machine learning tasks on the eight schools model [9, 29] and probabilistic matrix factorization on media consumption data. ... The eight schools model [9, 29] is a simple Bayesian hierarchical model... We demonstrate LCVI in a prototypical matrix factorization task, modeling the Last.fm data set [3] |
| Dataset Splits | No | The paper states: 'We randomly split the matrix entries into even-sized training and evaluation sets', but does not provide specific percentages, sample counts, or citations to predefined splits. |
| Hardware Specification | No | The paper mentions 'computational resources' in the acknowledgements but does not provide specific hardware details. |
| Software Dependencies | No | The paper mentions software like 'Adam [13]' and 'Stan [5]' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | Whenever not stated differently, we used joint optimization of {h} and λ with Adam [13] (learning rate set to 0.01) ran until convergence (20k epochs for hierarchical model and 3k epochs for matrix factorization with minibatches of 100 rows). |
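The eight schools model cited in the dataset rows is a standard Bayesian hierarchical model (Rubin 1981; Gelman et al., BDA). As an illustrative sketch only, not the authors' LCVI code, the snippet below reproduces the well-known data and the partial-pooling posterior means of the school effects conditional on the between-school scale `tau` (plugging in the conditional posterior mean of the group mean `mu` under a flat prior); the function name is hypothetical.

```python
import numpy as np

# Eight schools data: treatment-effect estimates y_j with standard
# errors sigma_j for eight schools (Rubin 1981; Gelman et al., BDA).
y = np.array([28., 8., -3., 7., -1., 1., 18., 12.])
sigma = np.array([15., 10., 16., 11., 9., 11., 10., 18.])

def conditional_posterior_means(tau):
    """Posterior means of the school effects theta_j given the
    between-school scale tau, with mu replaced by its conditional
    posterior mean under a flat prior. Illustrates the partial
    pooling the hierarchical model performs."""
    w = 1.0 / (sigma**2 + tau**2)
    mu_hat = np.sum(w * y) / np.sum(w)        # precision-weighted grand mean
    shrink = sigma**2 / (sigma**2 + tau**2)   # per-school shrinkage toward mu_hat
    return shrink * mu_hat + (1.0 - shrink) * y

print(conditional_posterior_means(tau=5.0))
```

At `tau=0` every school is pulled to the common mean (complete pooling); as `tau` grows the estimates revert to the raw `y_j` (no pooling), which is the behavior the hierarchical model interpolates between.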
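The experiment-setup row quotes joint optimization with Adam at learning rate 0.01. A minimal sketch of that setup, assuming a toy quadratic stand-in for the utility-calibrated objective (the real objective, `{h}`, and `lambda` live in the authors' repository linked above), is:

```python
import numpy as np

def adam_step(params, grads, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba) using the paper's learning rate 0.01."""
    m = b1 * m + (1 - b1) * grads
    v = b2 * v + (1 - b2) * grads**2
    m_hat = m / (1 - b1**t)                  # bias-corrected first moment
    v_hat = v / (1 - b2**t)                  # bias-corrected second moment
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v

# Toy objective ||theta - target||^2 as a placeholder for the negative
# calibrated ELBO; 3000 steps mirrors the "3k epochs" quoted for the
# matrix-factorization runs (purely illustrative).
target = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
m, v = np.zeros(3), np.zeros(3)
for t in range(1, 3001):
    grads = 2.0 * (theta - target)           # analytic gradient of the toy loss
    theta, m, v = adam_step(theta, grads, m, v, t)
```

The loop is self-contained NumPy so the optimizer mechanics are explicit; the paper's actual runs also used minibatches of 100 rows for the matrix-factorization model.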