Inferring learning rules from animal decision-making
Authors: Zoe Ashwood, Nicholas A. Roy, Ji Hyun Bak, Jonathan W. Pillow
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | After validating our framework on simulated choice data, we applied our model to data from rats and mice learning perceptual decision-making tasks. |
| Researcher Affiliation | Academia | ¹Princeton Neuroscience Institute, Princeton University; ²Dept. of Computer Science, Princeton University; ³Redwood Center for Theoretical Neuroscience, UC Berkeley; ⁴Dept. of Psychology, Princeton University |
| Pseudocode | No | The paper describes mathematical models and inference procedures but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | we have publicly released our code so as to enable this (https://github.com/pillowlab/psytrack_learning). |
| Open Datasets | Yes | applied our model to data from rats and mice learning perceptual decision-making tasks... data from 13 mice (78,000 trials; 6,000 trials per mouse) learning the IBL task... [11] ... Data available at: https://doi.org/10.6084/m9.figshare.11636748.v7... analyze data from a different animal species (rat) learning a different task [2]... [2] ... Data available at: https://doi.org/10.6084/m9.figshare.12213671.v1. |
| Dataset Splits | No | The paper mentions validating the framework on simulated data and performing hyperparameter optimization, but it does not provide specific details on training, validation, or test dataset splits (e.g., percentages or sample counts) for the real animal data used in experiments. |
| Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., specific GPU/CPU models, memory, or cloud resources) used to run its experiments. |
| Software Dependencies | No | The paper mentions developing a modeling framework and releasing code, but it does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | Four hyperparameters were used to generate the weight trajectories, with a learning rate α and a noise strength σ for each weight... We also consider a version of REINFORCE with a constant but weight-specific baseline, {βk}... Each baseline βk is an additional hyperparameter in the model... We compared a model without learning (RF0); a model with REINFORCE learning and a single learning rate α for all weights (RF1); and a model with REINFORCE learning and separate learning rates for all K = 2 weights (RFK)... REINFORCE with baseline (RFβ), with separate learning rates and baselines for different weights. |
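The Experiment Setup row above describes REINFORCE-style learning rules with per-weight learning rates α, noise strengths σ, and (in the RFβ variant) per-weight constant baselines β. A minimal sketch of one such trial-by-trial weight update is given below, assuming a Bernoulli logistic choice policy; the function name and signature are illustrative, not the API of the released `psytrack_learning` code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reinforce_step(w, x, y, r, alpha, beta, sigma, rng):
    """One REINFORCE-with-baseline update for a logistic policy
    p(y=1 | x, w) = sigmoid(w . x).  (Illustrative sketch only.)

    w     : (K,) current psychophysical weights
    x     : (K,) trial regressors (e.g., stimulus, bias)
    y     : observed choice in {0, 1}
    r     : reward on this trial in {0, 1}
    alpha : (K,) per-weight learning rates (RFK / RFbeta variants)
    beta  : (K,) per-weight constant baselines (RFbeta; zeros for RFK)
    sigma : (K,) per-weight noise strengths for trial-to-trial drift
    rng   : np.random.Generator
    """
    p = sigmoid(w @ x)
    # Gradient of the log-probability of the chosen action.
    grad_logp = (y - p) * x
    # Baseline-subtracted reward scales the policy-gradient step;
    # Gaussian noise models random drift in the weights.
    return w + alpha * (r - beta) * grad_logp + sigma * rng.standard_normal(w.shape)
```

Setting `alpha` to a shared scalar recovers the RF1 variant, and `alpha = 0` with nonzero `sigma` corresponds to the no-learning (RF0) random-walk model.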