Deep Homogeneous Mixture Models: Representation, Separation, and Approximation
Authors: Priyank Jaini, Pascal Poupart, Yaoliang Yu
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on both synthetic and real datasets confirm the benefits of depth in density estimation. |
| Researcher Affiliation | Academia | Priyank Jaini (Department of Computer Science & Waterloo AI Institute, University of Waterloo, pjaini@uwaterloo.ca); Pascal Poupart (University of Waterloo, Vector Institute & Waterloo AI Institute, ppoupart@uwaterloo.ca); Yaoliang Yu (Department of Computer Science & Waterloo AI Institute, University of Waterloo, yaoliang.yu@uwaterloo.ca) |
| Pseudocode | No | The paper describes algorithms but does not provide pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper mentions adapting code from another work ('HT-TMM') but does not state that its own developed code ('SPN-CG') is open-source or provide a link to it. |
| Open Datasets | Yes | We perform experiments on MNIST [15] for digit classification and small NORB [16] for 3D object recognition. |
| Dataset Splits | No | The paper mentions using MNIST and NORB datasets and adapting experiments from [26], but it does not explicitly state the training, validation, or test splits used within its text. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using an 'Adam SGD variant' for training but does not specify any software packages, libraries, or their version numbers. |
| Experiment Setup | Yes | For each iteration, we train the network using an Adam SGD variant with a base learning rate of 0.03 and momentum parameters β1 = β2 = 0.9. For each added network structure, we train the model for 22,000 iterations for MNIST and 40,000 for NORB. |
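The optimizer settings quoted above (Adam with a base learning rate of 0.03 and β1 = β2 = 0.9) can be sketched as a single pure-Python Adam update step. This is an illustrative sketch of the standard Adam rule under those hyperparameters, not the authors' code; the parameter and gradient values below are hypothetical.

```python
# Sketch of one Adam update using the paper's stated hyperparameters:
# lr = 0.03, beta1 = beta2 = 0.9. Values of theta and grad are illustrative.

def adam_step(theta, grad, m, v, t, lr=0.03, beta1=0.9, beta2=0.9, eps=1e-8):
    """Apply one Adam update to a scalar parameter; returns (theta, m, v)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# One step from theta = 1.0 with a positive gradient moves theta downward,
# by roughly the learning rate on the first step (m_hat / sqrt(v_hat) ~ 1).
theta, m, v = adam_step(theta=1.0, grad=0.5, m=0.0, v=0.0, t=1)
```

Note that with β1 = β2, the bias-correction factors coincide on the first step, so the initial update magnitude is close to the base learning rate regardless of the gradient's scale.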