Learning Exponential Families from Truncated Samples

Authors: Jane Lee, Andre Wibisono, Emmanouil Zampetakis

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | To save space in the main body for exposition of the theoretical results, we've included the results and details in Appendix E. In the end, all have (average) L2 error at most 0.15. For stability (and to bypass repeating the algorithm multiple times as stated in the analysis), we instead calculated gradients using the average of 10 samples, which was sufficient to have stable training results. See Figure 2. |
| Researcher Affiliation | Academia | Jane H. Lee, Department of Computer Science, Yale University (jane.h.lee@yale.edu); Andre Wibisono, Department of Computer Science, Yale University (andre.wibisono@yale.edu); Manolis Zampetakis, Department of Computer Science, Yale University (emmanouil.zampetakis@yale.edu) |
| Pseudocode | Yes | Algorithm 1: Projected SGD Algorithm Given Truncated Samples; Algorithm 2: Sample Gradient (a hedged sketch of both follows this table) |
| Open Source Code | No | The paper does not contain an explicit statement about the availability of its source code or a link to a code repository. |
| Open Datasets | No | To illustrate how the algorithm performs in different dimensions, we implemented our algorithm for 2-, 5-, 10-, and 20-dimensional exponential distributions. In all cases, the truncation set is the (hyper-)cube [0, 2]^d. |
| Dataset Splits | No | The paper does not specify explicit training, validation, or test dataset splits. It only mentions '1500 iterations' and 'each repeated 10 times' in the numerical example. |
| Hardware Specification | No | The paper mentions 'wall clock time' for training but does not specify any details about the hardware used (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | In all cases, we use 1500 iterations and step size 0.01, each repeated 10 times. (see the usage sketch below) |
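
For orientation, here is a minimal Python sketch of what Algorithm 1 (Projected SGD Given Truncated Samples) and Algorithm 2 (Sample Gradient) could look like in the paper's experimental setting: a d-dimensional exponential distribution with rate vector lam, truncation to the hypercube [0, 2]^d, and gradients averaged over 10 samples as the paper reports. For this family, p_lam(x) ∝ exp(-⟨lam, x⟩), so the single-sample gradient of the truncated negative log-likelihood is x - z, with x a truncated data point and z a fresh sample from the current truncated model. The initialization, the projection box, and the per-step data batching are assumptions, not details confirmed by the paper.

```python
import numpy as np

def sample_truncated_exp(lam, S_upper=2.0, n=1, rng=None):
    """Rejection-sample n points from a d-dimensional exponential with
    rate vector `lam`, truncated to the hypercube [0, S_upper]^d."""
    rng = rng or np.random.default_rng()
    d = lam.shape[0]
    out = []
    while len(out) < n:
        z = rng.exponential(1.0 / lam, size=d)  # coordinate-wise Exp(lam_i)
        if np.all(z <= S_upper):                # accept iff z lands in the cube
            out.append(z)
    return np.array(out)

def projected_sgd(data, d, iters=1500, step=0.01, batch=10,
                  lam_box=(0.1, 10.0), rng=None):
    """Sketch of projected SGD with a sample gradient averaged over
    `batch` model and data samples, in the spirit of Algorithms 1-2."""
    rng = rng or np.random.default_rng()
    lam = np.ones(d)  # hypothetical initialization; the paper's may differ
    for _ in range(iters):
        x = data[rng.integers(len(data), size=batch)].mean(axis=0)
        z = sample_truncated_exp(lam, n=batch, rng=rng).mean(axis=0)
        g = x - z                      # averaged stochastic gradient of the NLL
        lam = lam - step * g           # SGD step
        lam = np.clip(lam, *lam_box)   # projection onto an assumed box
    return lam
```

Rejection sampling is used here because it only needs membership access to the truncation set; for a product distribution on a cube, per-coordinate inverse-CDF sampling would be exact and faster.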
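
And a usage sketch with the reported settings (1500 iterations, step size 0.01, each repeated 10 times); the ground-truth rate vector and the sample size of 2000 are made up for illustration and are not taken from the paper.

```python
rng = np.random.default_rng(0)
d, lam_true = 2, np.array([1.0, 2.0])                    # hypothetical ground truth
data = sample_truncated_exp(lam_true, n=2000, rng=rng)   # assumed sample size

errs = [np.linalg.norm(projected_sgd(data, d, iters=1500, step=0.01,
                                     batch=10, rng=rng) - lam_true)
        for _ in range(10)]                              # 'each repeated 10 times'
print(f"average L2 error: {np.mean(errs):.3f}")
```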