Beyond black box densities: Parameter learning for the deviated components

Authors: Dat Do, Nhat Ho, XuanLong Nguyen

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Simulation studies are carried out to illustrate the theory; Section 4 presents multiple simulation experiments supporting it.
Researcher Affiliation | Academia | Dat Do, Department of Statistics, University of Michigan, Ann Arbor, MI 48109 (dodat@umich.edu); Nhat Ho, Department of Statistics and Data Sciences, University of Texas at Austin, Austin, TX 78712 (minhnhat@utexas.edu); XuanLong Nguyen, Department of Statistics, University of Michigan, Ann Arbor, MI 48109 (xuanlong@umich.edu)
Pseudocode | No | The paper contains no explicitly labeled "Algorithm" or "Pseudocode" blocks.
Open Source Code | No | The main text neither states that code for the described methodology is released nor links to a repository. The checklist answer "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]" refers to the internal review process, not a public-facing statement in the paper's narrative.
Open Datasets | No | The paper states, "For each n, we simulate n data points from true model (9)," indicating synthetic data generation rather than use of a publicly available dataset with a source or citation.
Dataset Splits | No | The paper describes simulating n data points and estimating parameters but does not specify training, validation, or test splits (e.g., percentages, sample counts, or a cross-validation scheme).
Hardware Specification | No | A checklist item states that "The experiments are run on CPUs only," but no specific CPU models, processor types, or other hardware details are provided.
Software Dependencies | No | The paper mentions a "Normalizing Flow neural network [11]" (Masked Autoregressive architecture) and cites PyTorch, but it does not give version numbers for PyTorch or any other software library used in the implementation or experiments.
Experiment Setup | No | The paper notes that the Normalizing Flow network has 5 layers and that 16 replications were run, but it provides no specific hyperparameter values (e.g., learning rate, batch size, epochs, optimizer settings) or other detailed training configuration in the main text.