Active Nearest Neighbor Regression Through Delaunay Refinement
Authors: Alexander Kravberg, Giovanni Luca Marchetti, Vladislav Polianskii, Anastasiia Varava, Florian T. Pokorny, Danica Kragic
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that ANNR outperforms the baseline for both closed-form functions and real-world examples, such as gravitational wave parameter inference and exploration of the latent space of a generative model. (Abstract) and Section 5, "Experiments" (section title) |
| Researcher Affiliation | Academia | School of Electrical Engineering and Computer Science, Royal Institute of Technology (KTH), Stockholm, Sweden. |
| Pseudocode | Yes | Algorithm 1 Active Nearest Neighbor Regression (ANNR). (A simplified sketch of a Delaunay-refinement query loop in this style is given after the table.) |
| Open Source Code | Yes | Our implementation of the ANNR is available at https://github.com/vlpolyansky/annr. |
| Open Datasets | Yes | We train φ as (the decoder of) a Variational Autoencoder (VAE, Kingma & Welling (2014)) on the MNIST dataset (Deng, 2012) of gray-scale images of hand-written digits (n = 784). (Section 5.3, Latent Manifold Exploration paragraph) and We use the same 6-dimensional formulation of parameter inference as described in (Bodin et al., 2021), and refer the reader to the Appendix for a detailed description of parameters. (Section 5.3, Gravitational Waves paragraph). |
| Dataset Splits | No | The paper describes its testing procedure ("P_test as an equally-spaced grid in m = 2 dimensions and by uniformly sampling from A when m > 2") but does not provide explicit training or validation dataset splits in terms of percentages or counts, or reference to a standard split. |
| Hardware Specification | Yes | All experiments are performed on CPU Ryzen 9 5950X 16-Core. |
| Software Dependencies | No | The paper mentions various concepts and models (e.g., VAE, k-d trees) but does not list specific software dependencies with their version numbers required for reproduction. |
| Experiment Setup | Yes | In practice, we suggest the following heuristic choice for λ, which we implement in our experiments. We select λ proportional to the size of the domain and inversely proportional to the scale of the function, effectively bringing domain and codomain to the same scale to balance exploration and exploitation of the function: λ = Vol(A) / (max f - min f). (Section 5.1; a worked computation of this heuristic is sketched after the table) and in order to deal with practical unboundedness of the density function, we perform an adaptive clipping of extensively sharp volumes. (Section 5.3, Gravitational Waves paragraph) and We deploy a two-dimensional latent space (m = 2) with a standard Gaussian prior. ...the latent prior is additionally encouraged by a hyperparameter β = 2 multiplying the corresponding ELBO loss term (Higgins et al., 2017). (Section 5.3, Latent Manifold Exploration paragraph) |
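
The λ heuristic quoted in the Experiment Setup row is straightforward to reproduce numerically. Below is a minimal sketch, assuming a hypothetical domain A = [0, 1]² and a toy target function; the grid-based estimate of the function's range is our assumption, not the authors' code.

```python
import numpy as np

# Illustrative assumptions: domain A = [0, 1]^2 and a toy target function.
# Neither is taken from the paper's implementation.
def f(x):
    return np.sin(4.0 * x[..., 0]) * np.cos(3.0 * x[..., 1])

vol_A = 1.0  # Vol(A) for the unit square

# Estimate max f - min f on a coarse grid; the paper states the heuristic
# itself, while this particular range estimate is an assumption.
u = np.linspace(0.0, 1.0, 101)
grid = np.stack(np.meshgrid(u, u, indexing="ij"), axis=-1)
values = f(grid)

lam = vol_A / (values.max() - values.min())  # lambda = Vol(A) / (max f - min f)
print(f"lambda = {lam:.3f}")
```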
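
The paper's Algorithm 1 is not reproduced in this report, but the following self-contained sketch illustrates the style of active querying via Delaunay refinement that the title describes: triangulate the queried points, score each simplex by its volume after lifting vertices to (x, λ·f(x)), and query a point inside the largest one. The barycenter rule, the toy function, the initial design, and the query budget are all simplifying assumptions; the paper's actual selection criterion may differ.

```python
import math
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

def f(x):
    # Toy target function, an assumption for illustration only.
    return np.sin(4.0 * x[..., 0]) * np.cos(3.0 * x[..., 1])

def lifted_volume(simplex, X, y, lam):
    """Volume of a simplex after lifting vertices to (x, lam * f(x))."""
    P = np.hstack([X[simplex], lam * y[simplex, None]])
    E = P[1:] - P[0]               # edge vectors of the lifted simplex
    gram = E @ E.T                 # Gram matrix of the edges
    k = E.shape[0]                 # simplex dimension
    return math.sqrt(max(np.linalg.det(gram), 0.0)) / math.factorial(k)

# Initial design: corners of A = [0, 1]^2 plus a few random interior points.
X = np.vstack([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0],
               rng.uniform(size=(4, 2))])
y = f(X)
lam = 1.0 / (y.max() - y.min())    # heuristic scale with Vol(A) = 1

for _ in range(30):                # arbitrary active query budget
    tri = Delaunay(X)
    vols = [lifted_volume(s, X, y, lam) for s in tri.simplices]
    largest = tri.simplices[int(np.argmax(vols))]
    x_new = X[largest].mean(axis=0)  # simplified: barycenter, not a
                                     # geometric refinement point
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new))

print(f"queried {len(X)} points in total")
```

Scaling the function axis by λ is what balances the two regimes the quoted Section 5.1 describes: in flat regions, large domain-space simplices dominate (exploration), while in steep regions, the lifted volumes grow and pull queries toward rapid function variation (exploitation).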