Learning a 1-layer conditional generative model in total variation

Authors: Ajil Jalal, Justin Kang, Ananya Uppal, Kannan Ramchandran, Eric Price

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5 Simulations
Researcher Affiliation | Academia | Ajil Jalal, Justin Kang (UC Berkeley, {ajiljalal, justin_kang}@berkeley.edu); Ananya Uppal (UT Austin, ananya.uppal09@gmail.com); Kannan Ramchandran (UC Berkeley, kannanr@eecs.berkeley.edu); Eric Price (UT Austin, ecprice@cs.utexas.edu)
Pseudocode | No | The paper describes methods and proofs but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/basics-lab/learningGenerativeModels.git.
Open Datasets | No | The paper describes generating synthetic data from distributions such as x ∼ N(0, I_k), x with i.i.d. Lap(0, 1) coordinates, or Normal mixtures for its simulations, but does not use or provide concrete access information for a pre-existing public dataset.
Dataset Splits | No | The paper describes generating 'n' samples for its simulations but does not specify any training, validation, or test dataset splits, percentages, or cross-validation setup.
Hardware Specification | No | The paper does not provide any specific hardware specifications such as GPU or CPU models, memory details, or cloud computing instance types used for running the simulations.
Software Dependencies | No | The paper mentions using the MATLAB 'integral' function but does not specify its version or any other software dependencies with specific version numbers.
Experiment Setup | Yes | In these experiments, we set d = 1, and plot the results for various values of the number of samples n in Figure 2a and various values of the input dimension k in Figure 2b. For each plot, we fix the true σ = 1 and w = (1/k)·𝟙. In each case the MLE is solved via gradient descent with backtracking line search, and we check a first-order condition ‖∇_{w,σ} log p_{w,σ}(y | x)‖₂ < δ = 10⁻³ as the exit condition.
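
The synthetic data described in the Open Datasets row is straightforward to regenerate. Below is a minimal Python/NumPy sketch of the three input distributions mentioned there (standard Gaussian, i.i.d. Laplace coordinates, and a Normal mixture), paired with a 1-layer conditional model y | x ∼ N(f(wᵀx), σ²) for the d = 1 setting of the Experiment Setup row. The ReLU link f, the Gaussian-noise form of the conditional, and the specific mixture components are illustrative assumptions rather than the paper's exact specification, and the paper's own code appears to be MATLAB, so this is only a sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_inputs(n, k, dist="gaussian"):
    """Input distributions mentioned in the paper's simulations."""
    if dist == "gaussian":                      # x ~ N(0, I_k)
        return rng.standard_normal((n, k))
    if dist == "laplace":                       # i.i.d. Lap(0, 1) coordinates
        return rng.laplace(loc=0.0, scale=1.0, size=(n, k))
    if dist == "mixture":                       # placeholder two-component Normal mixture (assumed)
        centers = rng.choice([-2.0, 2.0], size=(n, 1))
        return centers + rng.standard_normal((n, k))
    raise ValueError(f"unknown dist: {dist}")

def sample_outputs(x, w, sigma):
    """1-layer conditional model y | x ~ N(f(w^T x), sigma^2) with d = 1.

    The ReLU link f is an assumption for illustration only.
    """
    mean = np.maximum(x @ w, 0.0)
    return mean + sigma * rng.standard_normal(x.shape[0])

# Example: n samples in input dimension k with true sigma = 1 and w = (1/k) * ones.
n, k = 1000, 10
w_true, sigma_true = np.ones(k) / k, 1.0
x = sample_inputs(n, k, dist="gaussian")
y = sample_outputs(x, w_true, sigma_true)
```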
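
The Experiment Setup row states that the MLE is solved by gradient descent with backtracking line search, exiting once the gradient norm drops below δ = 10⁻³. The sketch below implements that loop for the Gaussian/ReLU model assumed above, optimizing (w, σ) jointly over the average negative log-likelihood; the Armijo constants, the initialization, and the objective scaling are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def neg_log_lik(theta, x, y):
    """Average negative log-likelihood of y | x ~ N(relu(x @ w), sigma^2).

    theta = (w_1, ..., w_k, sigma); Gaussian noise and ReLU link are the
    illustrative model assumed in the data-generation sketch above.
    """
    w, sigma = theta[:-1], theta[-1]
    resid = y - np.maximum(x @ w, 0.0)
    return np.mean(resid**2 / (2 * sigma**2) + np.log(sigma) + 0.5 * np.log(2 * np.pi))

def grad_neg_log_lik(theta, x, y):
    """Gradient of the average negative log-likelihood w.r.t. (w, sigma)."""
    w, sigma = theta[:-1], theta[-1]
    z = x @ w
    resid = y - np.maximum(z, 0.0)
    active = (z > 0).astype(float)              # ReLU derivative
    grad_w = -(x * (resid * active)[:, None]).mean(axis=0) / sigma**2
    grad_sigma = np.mean(-resid**2 / sigma**3 + 1.0 / sigma)
    return np.concatenate([grad_w, [grad_sigma]])

def fit_mle(x, y, delta=1e-3, max_iter=10_000):
    """Gradient descent with backtracking (Armijo) line search.

    Stops when the gradient norm drops below delta = 1e-3, mirroring the
    first-order exit condition quoted in the Experiment Setup row.
    """
    k = x.shape[1]
    theta = np.concatenate([np.zeros(k), [1.0]])  # initial (w, sigma); sigma > 0
    for _ in range(max_iter):
        g = grad_neg_log_lik(theta, x, y)
        if np.linalg.norm(g) < delta:
            break
        f0, t = neg_log_lik(theta, x, y), 1.0
        # Backtracking: halve the step until the Armijo decrease condition
        # holds and sigma stays positive. Constants 1e-4 and 0.5 are illustrative.
        for _ in range(60):
            cand = theta - t * g
            if cand[-1] > 0 and neg_log_lik(cand, x, y) <= f0 - 1e-4 * t * (g @ g):
                break
            t *= 0.5
        theta = cand
    return theta[:-1], theta[-1]                  # (w_hat, sigma_hat)

# w_hat, sigma_hat = fit_mle(x, y)   # using x, y from the sketch above
```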