Importance Weighted Kernel Bayes’ Rule
Authors: Liyuan Xu, Yutian Chen, Arnaud Doucet, Arthur Gretton
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our KBR on challenging synthetic benchmarks, including a filtering problem with a state-space model involving high dimensional image observations. The proposed method yields uniformly better empirical performance than the existing KBR, and competitive performance with other competing methods. ... In this section, we empirically investigate the performance of our KBR estimator in a variety of settings, including the problem of learning posterior mean proposed in Fukumizu et al. (2013), as well as challenging filtering problems where the observations are high-dimensional images. |
| Researcher Affiliation | Collaboration | 1Gatsby Unit, 2DeepMind. Correspondence to: Liyuan Xu <liyuan.jo.19@ucl.ac.uk>. |
| Pseudocode | Yes | Algorithm 1 Importance Weighted Kernel Bayes Rule |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | DeepMind Lab (Beattie et al., 2016)... dSprites (Matthey et al., 2017), which is a dataset of 2D shape images... |
| Dataset Splits | Yes | These are selected using the last 200 steps of the training sequence as a validation set. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper does not provide specific software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | In this experiment, we set η = λ = 0.2 and used Gaussian kernels for both KBR methods, where the bandwidth is given by the median trick. ... For both approaches, we used Gaussian kernels kX, kZ whose bandwidths are set to βDX and βDZ, respectively... We used the KuLSIF leave-one-out cross-validation procedure (Kanamori et al., 2012) to tune the regularization parameter η, and set λ = η. |
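The Experiment Setup row reports that both KBR methods use Gaussian kernels with bandwidths chosen by the median trick (the median of pairwise distances between training points). As a minimal illustrative sketch, not the paper's implementation, the two ingredients could look like the following; the function names are assumptions:

```python
import numpy as np


def median_trick_bandwidth(X):
    """Median-trick heuristic: bandwidth = median pairwise Euclidean distance.

    X: array of shape (n, d). Returns a positive scalar bandwidth.
    """
    diffs = X[:, None, :] - X[None, :, :]          # (n, n, d) pairwise differences
    dists = np.sqrt((diffs ** 2).sum(axis=-1))     # (n, n) distance matrix
    # Use only the strictly upper triangle to exclude zero self-distances.
    return np.median(dists[np.triu_indices_from(dists, k=1)])


def gaussian_kernel(X, Y, bandwidth):
    """Gram matrix of the Gaussian (RBF) kernel k(x, y) = exp(-||x - y||^2 / (2 s^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))


rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
bw = median_trick_bandwidth(X)
K = gaussian_kernel(X, X, bw)
```

The paper further scales the median-trick value by a factor β (bandwidths βDX, βDZ) and tunes the regularization parameter η by KuLSIF leave-one-out cross-validation; neither step is shown here.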