Deep Density Destructors
Authors: David Inouye, Pradeep Ravikumar
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate our framework on a 2D dataset, MNIST, and CIFAR-10. |
| Researcher Affiliation | Academia | Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA. Correspondence to: David I. Inouye <dinouye@cs.cmu.edu>. |
| Pseudocode | No | The paper describes algorithms (e.g., a greedy training algorithm) but does not present them in a structured pseudocode or algorithm block format. |
| Open Source Code | Yes | Code is available on the first author's website. |
| Open Datasets | Yes | We now give some initial results using the MNIST and CIFAR-10 datasets to show that it is possible to train density destructor models on larger datasets. We base our experiments on the unconditional (i.e. unsupervised) MNIST and CIFAR-10 experiments in (Papamakarios et al., 2017) and use the same preprocessed data as in (Papamakarios et al., 2017). |
| Dataset Splits | Yes | We implement a simple non-parametric greedy algorithm which merely estimates a density at each layer, transforms the training data via the associated destructor and repeats this process until the likelihood on a held-out validation set decreases. (A sketch of this greedy loop appears after the table.) |
| Hardware Specification | Yes | Note that the timings for the baselines from (Papamakarios et al., 2017) are based on using a Titan X GPU whereas our methods merely use at most 10 CPUs. |
| Software Dependencies | No | The paper mentions using "Python sci-kit learn library (Pedregosa et al., 2011) and mlpack (Curtin et al., 2013)" but does not provide specific version numbers for these libraries. |
| Experiment Setup | Yes | We build a canonical destructor by composing the independent standard normal inverse CDF, a random linear projection, a standard normal CDF (which returns the values to the unit hypercube) and finally an independent histogram on the unit hypercube: F_hist(Φ(A_rand Φ⁻¹(x))). The histogram was estimated with 20 bins and a regularization parameter (pseudo-counts) α in the set {0.1, 1, 10}. (A sketch of this composed destructor appears after the table.) |
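
The greedy procedure quoted in the Dataset Splits row can be summarized as a short training loop. The sketch below is a minimal illustration under assumptions: each layer exposes a scikit-learn-style `fit` / `transform` / `score_samples` interface (hypothetical, not the authors' released API), and the stopping rule is read as "stop as soon as a new layer no longer improves held-out log-likelihood."

```python
import numpy as np

def train_greedy_destructor(X_train, X_valid, make_layer, max_layers=100):
    """Greedily stack destructor layers; stop when validation likelihood would drop."""
    layers = []
    Z_train, Z_valid = X_train, X_valid
    for _ in range(max_layers):
        layer = make_layer()                 # fresh, unfit layer (hypothetical factory)
        layer.fit(Z_train)                   # estimate a density at this layer
        gain = np.mean(layer.score_samples(Z_valid))  # layer's held-out log-likelihood term
        if gain <= 0.0:                      # validation likelihood decreases: stop
            break
        layers.append(layer)
        # transform the data via the destructor associated with the fitted density
        Z_train = layer.transform(Z_train)
        Z_valid = layer.transform(Z_valid)
    return layers
```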
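
The composed canonical destructor in the Experiment Setup row, F_hist(Φ(A_rand Φ⁻¹(x))), can likewise be sketched. Assumptions in the sketch: inputs lie in the unit hypercube, the random linear projection is taken to be an orthogonal (rotation) matrix so the Gaussianized data stays standard normal, and the class name and small-value clipping are illustrative only; the bin count and pseudo-count regularization follow the quoted description.

```python
import numpy as np
from scipy.stats import norm

class RandomLinearHistogramDestructor:
    """Sketch of F_hist(Phi(A_rand Phi^{-1}(x))) for data on the unit hypercube."""

    def __init__(self, n_bins=20, alpha=1.0, random_state=None):
        self.n_bins, self.alpha = n_bins, alpha
        self.rng = np.random.default_rng(random_state)

    def _gaussianize_and_project(self, X):
        # Phi^{-1} maps (0,1)^d to R^d; clip to avoid infinities (assumption).
        Z = norm.ppf(np.clip(X, 1e-10, 1 - 1e-10)) @ self.A_.T
        return norm.cdf(Z)  # Phi returns the values to the unit hypercube

    def fit(self, X):
        d = X.shape[1]
        # Random rotation (assumption: an orthogonal matrix keeps the projected
        # Gaussianized data standard normal).
        self.A_, _ = np.linalg.qr(self.rng.standard_normal((d, d)))
        U = self._gaussianize_and_project(X)
        # Independent histogram per dimension with pseudo-count regularization.
        self.hists_ = []
        for j in range(d):
            counts, edges = np.histogram(U[:, j], bins=self.n_bins, range=(0.0, 1.0))
            probs = (counts + self.alpha) / (counts.sum() + self.alpha * self.n_bins)
            self.hists_.append((edges, np.concatenate([[0.0], np.cumsum(probs)])))
        return self

    def transform(self, X):
        U = self._gaussianize_and_project(X)
        out = np.empty_like(U)
        for j, (edges, cdf) in enumerate(self.hists_):
            # Piecewise-linear histogram CDF F_hist, applied independently per dimension.
            out[:, j] = np.interp(U[:, j], edges, cdf)
        return out
```

A deep model would then be obtained by handing such layers to the greedy loop above, e.g. `make_layer = lambda: RandomLinearHistogramDestructor(n_bins=20, alpha=1.0)` (the α value would be selected from {0.1, 1, 10} as in the quoted setup).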