Neuronal Gaussian Process Regression
Authors: Johannes Friedrich
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | I applied my derived neuronal networks to the Snelson dataset [44], which has been widely used for SGPR. Throughout I considered ten 50:50 train/test splits. I next evaluated the performance of my Bio NN on larger and higher dimensional data. I replicate the experiment set-up in [12] and compare to the predictive log-likelihood of Probabilistic Back-propagation [12] and Monte Carlo Dropout [13] on ten UCI datasets [45], cf. Table 1. |
| Researcher Affiliation | Industry | Johannes Friedrich, Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, jfriedrich@flatironinstitute.org |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | (My source code can be found at https://github.com/j-friedrich/neuronalGPR). |
| Open Datasets | Yes | I applied my derived neuronal networks to the Snelson dataset [44], which has been widely used for SGPR. I next evaluated the performance of my Bio NN on larger and higher dimensional data. I replicate the experiment set-up in [12] and compare to the predictive log-likelihood of Probabilistic Back-propagation [12] and Monte Carlo Dropout [13] on ten UCI datasets [45], cf. Table 1. |
| Dataset Splits | Yes | Throughout I considered ten 50:50 train/test splits. I used the negative log-likelihood as loss function and performed 40 passes over the available training data using the Adam optimizer [47] with learning rate tuned by splitting the training data into a new 80:20 train/validation split. (See the first sketch below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software such as GPy and the Adam optimizer, but does not provide version numbers for these or other software dependencies. |
| Experiment Setup | Yes | Throughout I considered ten 50:50 train/test splits. I first studied how the synaptic weights can be learned online by performing the synaptic plasticity update for each presented data pair (x_i, y_i), passing multiple times over the training data. ... I used the negative log-likelihood as loss function and performed 40 passes over the available training data using the Adam optimizer [47] with learning rate tuned by splitting the training data into a new 80:20 train/validation split. ... For each train/test split the 6 tuning curve centers were initialized on a regular grid at {0.5, 1.5, ..., 5.5} and updated to minimize the squared prediction error. (See the second sketch below the table.) |
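
The evaluation protocol quoted in the Dataset Splits row can be summarized in code. The sketch below is a minimal illustration only: it assumes a simple Gaussian regression stand-in model rather than the paper's biologically plausible network, and the learning-rate grid, random seed, and `GaussianRegression` class are hypothetical choices. The ten 50:50 splits, the negative log-likelihood loss, the 40 Adam passes, and the 80:20 train/validation split for learning-rate tuning follow the quoted description.

```python
import numpy as np
import torch

class GaussianRegression(torch.nn.Module):
    """Stand-in model: linear mean plus a learned homoscedastic noise level."""
    def __init__(self, dim):
        super().__init__()
        self.mean = torch.nn.Linear(dim, 1)
        self.log_sigma = torch.nn.Parameter(torch.zeros(()))

    def nll(self, X, y):
        # Average negative log-likelihood of y under N(mean(X), sigma^2);
        # assumes X has shape (N, dim) and y has shape (N,).
        resid = y - self.mean(X).squeeze(-1)
        return (self.log_sigma
                + 0.5 * (resid / self.log_sigma.exp()) ** 2
                + 0.5 * np.log(2 * np.pi)).mean()

def fit(model, X, y, lr, epochs=40):
    # 40 passes over the training data with Adam, as described in the table.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        model.nll(X, y).backward()
        opt.step()

def run_splits(X, y, n_splits=10, lr_grid=(1e-3, 1e-2, 1e-1), seed=0):
    X = torch.as_tensor(X, dtype=torch.float32)
    y = torch.as_tensor(y, dtype=torch.float32)
    rng = np.random.default_rng(seed)
    test_lls = []
    for _ in range(n_splits):
        perm = torch.as_tensor(rng.permutation(len(X)))
        half = len(X) // 2
        tr, te = perm[:half], perm[half:]      # 50:50 train/test split
        cut = int(0.8 * len(tr))
        sub, val = tr[:cut], tr[cut:]          # 80:20 split for learning-rate tuning

        def val_ll(lr):
            m = GaussianRegression(X.shape[1])
            fit(m, X[sub], y[sub], lr)
            return -m.nll(X[val], y[val]).item()

        best_lr = max(lr_grid, key=val_ll)
        model = GaussianRegression(X.shape[1])
        fit(model, X[tr], y[tr], best_lr)      # retrain on the full training half
        test_lls.append(-model.nll(X[te], y[te]).item())
    return float(np.mean(test_lls)), float(np.std(test_lls))
```

For the Snelson data, `X` would be the 1-D inputs reshaped to shape (N, 1) and `y` the 1-D targets; the function returns the mean and standard deviation of the held-out predictive log-likelihood over the ten splits.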
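
The Experiment Setup row also describes six tuning curve centers initialized on a regular grid at {0.5, 1.5, ..., 5.5} and updated to minimize the squared prediction error. The second sketch below illustrates one such scheme for 1-D inputs, using Gaussian tuning curves with an assumed width `ell`, readout weights refit by least squares, and a plain gradient step on the centers; this is an illustrative stand-in, not the paper's synaptic plasticity rule.

```python
import numpy as np

def rbf_features(x, centers, ell=1.0):
    # x: (N,) 1-D inputs; returns (N, K) Gaussian tuning-curve activations.
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / ell) ** 2)

def fit_centers(x, y, n_iter=200, lr=0.05, ell=1.0):
    centers = np.arange(0.5, 6.0, 1.0)   # regular grid {0.5, 1.5, ..., 5.5}
    for _ in range(n_iter):
        Phi = rbf_features(x, centers, ell)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # refit readout weights
        resid = Phi @ w - y                           # prediction errors
        # Gradient of the summed squared error with respect to the centers
        # (readout weights held fixed for this step).
        grad = (2 * resid[:, None] * w[None, :] * Phi
                * (x[:, None] - centers[None, :]) / ell**2).sum(axis=0)
        centers = centers - lr * grad / len(x)
    return centers, w
```

Called as `fit_centers(x_train, y_train)` on the 1-D Snelson inputs, it returns the adapted centers together with the final readout weights.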