Autoconj: Recognizing and Exploiting Conjugacy Without a Domain-Specific Language
Authors: Matthew D. Hoffman, Matthew J. Johnson, Dustin Tran
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In this section we provide code snippets and empirical results to demonstrate Autoconj's functionality, as well as the benefits of being embedded in Python as opposed to a more narrowly focused domain-specific language." Figure 2 compares algorithms for Bayesian factor analysis by their estimate of the expected log-joint as a function of runtime; Table 1 reports the time to run 500 iterations of variational inference on a mixture of Gaussians. |
| Researcher Affiliation | Industry | Matthew D. Hoffman Google AI mhoffman@google.com Matthew J Johnson* Google Brain mattjj@google.com Dustin Tran Google Brain trandustin@google.com |
| Pseudocode | No | The paper provides several Python code listings (Listings 1-5) as examples of usage and implementation details, but these are concrete code, not pseudocode, and are not labeled as 'Algorithm' blocks. |
| Open Source Code | Yes | Autoconj (including experiments) is available at https://github.com/google-research/autoconj. |
| Open Datasets | No | The paper describes generating data from a linear factor model for the factor analysis experiment and a mixture-of-Gaussians model for benchmarking, but it does not provide access information (links, DOIs, formal citations) for any public datasets used in experiments. |
| Dataset Splits | No | The paper does not specify train, validation, or test dataset split percentages or sample counts for any of its experiments. |
| Hardware Specification | No | Table 1 mentions '1 CPU', '6 CPU', and '1 GPU' but does not specify the exact models or types of CPU or GPU used, nor does it provide other hardware specifications like memory. |
| Software Dependencies | No | The paper mentions software like NumPy, Autograd, scipy.optimize, and TensorFlow but does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | Table 1 states 'Time to run 500 iterations of variational inference'. Listing 4 shows 'for iteration in range(100):' for Bayesian logistic regression. Listing 5 provides model parameters such as 'num_examples = 50', 'num_features = 10', 'num_latents = 5', 'alpha = 2.', 'beta = 8.' for the mixture-of-Gaussians model. |
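To make the Experiment Setup row concrete, the following is a minimal sketch of a data-generating setup using the parameter names and values quoted from Listing 5. The interpretation of `num_latents` as the number of mixture components, and the sampling code itself, are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Parameter values quoted from Listing 5 of the paper.
num_examples = 50    # number of observations
num_features = 10    # dimensionality of each observation
num_latents = 5      # assumed: number of mixture components
alpha, beta = 2., 8. # hyperparameters quoted from Listing 5

# Hypothetical mixture-of-Gaussians data generation (not the paper's code).
rng = np.random.default_rng(0)
means = rng.normal(size=(num_latents, num_features))
assignments = rng.integers(num_latents, size=num_examples)
data = means[assignments] + rng.normal(size=(num_examples, num_features))

print(data.shape)  # (50, 10)
```

This sketch only fixes the shapes and scales implied by the quoted parameters; the paper's own listing constructs the model for Autoconj's conjugacy analysis rather than raw NumPy sampling.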