Neural Sampling in Hierarchical Exponential-family Energy-based Models
Authors: Xingsi Dong, Si Wu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Sec. 5, we validate the capabilities of the HEE model using 2D synthetic datasets and Fashion MNIST [30]. Then, we incorporate receptive fields as biological constraints when training the HEE model on CIFAR10 [31]. We show that the HEE model can achieve performance comparable with previous EBMs [27]. We also investigate the neural representation of semantic information, including orientation, color, and category, which exhibits similarities to biological visual systems. Neural adaptation can trigger neural phenomena, including oscillations and transients, which are widely observed in biological systems. |
| Researcher Affiliation | Academia | Xingsi Dong1,2,3 (dxs19980605@pku.edu.cn), Si Wu1,2,3 (siwu@pku.edu.cn). 1. PKU-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies. 2. School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University. 3. IDG/McGovern Institute for Brain Research, Center of Quantitative Biology, Peking University. |
| Pseudocode | No | The paper describes the model's dynamics and processes through mathematical equations and narrative text, but it does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the source code for the methodology described in this paper, nor does it provide a direct link to such a repository. While BrainPy is mentioned, it is not stated as the repository for this paper's specific code. |
| Open Datasets | Yes | In Sec. 5, we validate the capabilities of the HEE model using 2D synthetic datasets and Fashion MNIST [30]. Then, we incorporate receptive fields as biological constraints when training the HEE model on CIFAR10 [31]. |
| Dataset Splits | No | The paper uses the Fashion MNIST and CIFAR10 datasets but does not specify training, validation, or test splits, whether as percentages, sample counts, or explicit references to predefined splits. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions BrainPy as a framework used but does not provide specific software dependencies with version numbers, such as Python versions, deep learning framework versions (e.g., PyTorch, TensorFlow), or other libraries. |
| Experiment Setup | No | The paper describes general architectural details, such as a "fully connected architecture" and "HEE-NL-A with layers L = 10", but it does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs), optimizer settings, or other training configurations. |