Bounded rationality in structured density estimation
Authors: Tianyuan Teng, Li Kevin Wenliang, Hang Zhang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this study, we explore how these learned distributions deviate from the ground truth, resulting in observable inconsistency in a novel structured density estimation task. During each trial, human participants were asked to learn and report the latent probability distribution functions underlying sequentially presented independent observations. As the number of observations increased, the reported predictive density became closer to the ground truth. Nevertheless, we observed an intriguing inconsistency in human structure estimation, specifically a large error in the number of reported clusters. Such inconsistency is invariant to the scale of the distribution and persists across stimulus modalities. |
| Researcher Affiliation | Academia | Tianyuan Teng, Center for Life Sciences, Peking University, tengtianyuan@pku.edu.cn; Li Kevin Wenliang, Gatsby Computational Neuroscience Unit, University College London, kevinli@gatsby.ucl.ac.uk; Hang Zhang, School of Psychological and Cognitive Sciences, Peking University, hang.zhang@pku.edu.cn |
| Pseudocode | No | The paper includes a schematic diagram (Figure 3) illustrating the framework, but it does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described. |
| Open Datasets | No | The paper describes the generation of data for the experiments (e.g., "The true generative distribution set was composed of 24 distributions"), but it does not provide concrete access information (specific link, DOI, repository name, formal citation) for a publicly available or open dataset. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning into training, validation, and test sets. |
| Hardware Specification | Yes | During training, we estimate the log-likelihood using M = 10 000 parallel simulations on NVIDIA GTX 1080 and A100 GPUs. |
| Software Dependencies | No | The paper mentions "PyTorch [26]" but does not specify a version number for it or for any other ancillary software. |
| Experiment Setup | Yes | During training, we estimate the log-likelihood using M = 10,000 parallel simulations on NVIDIA GTX 1080 and A100 GPUs. The Nelder-Mead routine typically converges to a relative precision of 0.001 on the parameters within 300 iterations. We restart Nelder-Mead 10 times, each warm-started from the parameters found by the previous run, which avoids early convergence. To further avoid local optima, we repeat this whole procedure (with 10 restarts) 10 times with different random seeds. (A code sketch of this procedure follows the table.) |
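
The fitting loop quoted under "Experiment Setup" (a simulation-based log-likelihood estimate, Nelder-Mead with warm restarts, repeated over random seeds) can be summarized in code. Below is a minimal sketch assuming SciPy's Nelder-Mead implementation; the toy Gaussian simulator, the two-parameter model, and the window-count likelihood estimator are illustrative stand-ins, not the authors' code (the paper releases none).

```python
# Minimal sketch of the fitting procedure quoted under "Experiment Setup".
# Only the loop structure follows the paper: M = 10,000 simulations per
# log-likelihood estimate, 10 warm-started Nelder-Mead restarts, and 10
# repetitions with different random seeds. The simulator and the
# window-count likelihood estimator below are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

M = 10_000  # parallel simulations per log-likelihood evaluation


def simulate(params, size, rng):
    """Toy stand-in for the paper's simulator: Gaussian responses."""
    mu, sigma = params
    return rng.normal(mu, abs(sigma) + 1e-9, size=size)


def neg_log_likelihood(params, data, rng):
    """Monte Carlo estimate of the negative log-likelihood.

    Draws M simulated responses per datum and scores each observed value
    by the fraction of simulations landing within a small window around
    it. The estimate is stochastic; simulation-based pipelines often fix
    the simulation noise (common random numbers) to stabilize
    Nelder-Mead, a detail the quoted text does not specify.
    """
    sims = simulate(params, size=(M, len(data)), rng=rng)
    eps = 0.05  # window half-width (illustrative)
    p = np.clip(np.mean(np.abs(sims - data) < eps, axis=0), 1e-12, None)
    return -np.log(p).sum()


def fit(data, x0, n_restarts=10, n_seeds=10):
    """Nelder-Mead with warm restarts, repeated over random seeds."""
    best = None
    for seed in range(n_seeds):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for _ in range(n_restarts):
            # Each restart is warm-started from the previous optimum,
            # which counteracts Nelder-Mead's premature convergence.
            res = minimize(
                neg_log_likelihood, x, args=(data, rng),
                method="Nelder-Mead",
                options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 300},
            )
            x = res.x
        if best is None or res.fun < best.fun:
            best = res
    return best


if __name__ == "__main__":
    data = np.random.default_rng(0).normal(1.0, 0.5, size=200)
    print(fit(data, x0=[0.0, 1.0]).x)
```

The warm-restart pattern, feeding each run's optimum back in as the next starting point, is what the quoted text means by avoiding early convergence; the outer loop over seeds then guards against local optima of the noisy objective.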