A Theory of Decision Making Under Dynamic Context
Authors: Michael Shvartsman, Vaibhav Srivastava, Jonathan D. Cohen
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We simulate 100,000 trials for each model. Figure 1 shows results from the simulation of the flanker task, recovering the characteristic early below-chance performance in incongruent trials. This simulation supports the assertion that our theory generalizes the flanker model of [5], though we are not sure why our scale on timesteps appears different by about 5x in spite of using what we think are equivalent parameters. For the AX-CPT behavior, we compare qualitative patterns from our model to a heterogeneous dataset of humans performing this task (n=59) across 4 different manipulations with 200 trials per subject [24]. (A sketch of the conditional-accuracy tally behind the below-chance pattern follows the table.) |
| Researcher Affiliation | Academia | Michael Shvartsman, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, ms44@princeton.edu; Vaibhav Srivastava, Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544, vaibhavs@princeton.edu; Jonathan D. Cohen, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, jdc@princeton.edu |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | A library for simulating tasks that fit in our framework and code for generating all simulation figures in this paper can be found at https://github.com/mshvartsman/cddm. |
| Open Datasets | Yes | For the AX-CPT behavior, we compare qualitative patterns from our model to a heterogeneous dataset of humans performing this task (n=59) across 4 different manipulations with 200 trials per subject [24]. [24] O. Lositsky, R. C. Wilson, M. Shvartsman, and J. D. Cohen, "A Drift Diffusion Model of Proactive and Reactive Control in a Context-Dependent Two-Alternative Forced Choice Task," in The Multi-disciplinary Conference on Reinforcement Learning and Decision Making, pp. 103–107, 2015. |
| Dataset Splits | No | The paper mentions comparing its model to a human dataset and using simulations, but it does not specify explicit training, validation, or testing splits for the data used in the experiments. |
| Hardware Specification | No | The paper describes simulations and modeling but does not provide any specific details about the hardware (e.g., GPU, CPU models, or memory) used to run the experiments. |
| Software Dependencies | No | The paper links to a library for simulating tasks and to code for generating the simulation figures, but it does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | The remainder of parameters are identical across both task simulations: σc = σg = 9, θ = 0.9, µc = µg = 0 for c0 and g0, and µc = µg = 1 for c1 and g1. To replicate the flanker results, we followed [5] by introducing a non-decision error parameter γ = 0.03: this is the probability of making a random response immediately at the first timestep. We simulated 100,000 trials for each model. (An illustrative simulation sketch using these quoted values follows the table.) |
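
The Experiment Setup row quotes the full parameterization, which is enough to sketch a single trial. Below is a minimal sketch, assuming an SPRT-style Gaussian evidence accumulator that responds when the posterior crosses the belief threshold θ; `simulate_trial` and its update rule are hypothetical stand-ins for illustration, not the authors' cddm implementation. Only the numeric values (σ = 9, θ = 0.9, µ ∈ {0, 1}, γ = 0.03) come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameter values quoted in the Experiment Setup row above.
SIGMA = 9.0           # sigma_c = sigma_g = 9 (evidence noise)
THETA = 0.9           # theta = 0.9 (belief threshold)
MU0, MU1 = 0.0, 1.0   # mu = 0 for c0/g0, mu = 1 for c1/g1
GAMMA = 0.03          # non-decision error: random response at the first timestep

def simulate_trial(true_hypothesis, max_t=5_000):
    """Accumulate Gaussian evidence and respond once the posterior for
    either hypothesis crosses THETA. Returns (choice, decision_time).
    This SPRT-style update is an assumption for illustration, not the
    paper's joint context/stimulus inference model."""
    # Non-decision error: with probability GAMMA, guess at the first step.
    if rng.random() < GAMMA:
        return int(rng.integers(2)), 1
    mu = MU1 if true_hypothesis == 1 else MU0
    llr = 0.0  # log-likelihood ratio of hypothesis 1 vs. hypothesis 0
    for t in range(1, max_t + 1):
        x = mu + SIGMA * rng.standard_normal()
        # LLR increment for N(MU1, SIGMA^2) vs. N(MU0, SIGMA^2) samples.
        llr += (x * (MU1 - MU0) - 0.5 * (MU1**2 - MU0**2)) / SIGMA**2
        p1 = 1.0 / (1.0 + np.exp(-llr))  # posterior under a flat prior
        if p1 >= THETA:
            return 1, t
        if p1 <= 1.0 - THETA:
            return 0, t
    return int(llr > 0), max_t  # fallback if no boundary is reached

choice, rt = simulate_trial(true_hypothesis=1)
```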
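
The flanker result quoted in the Research Type row hinges on conditional accuracy: accuracy computed within response-time bins, which dips below chance early on incongruent trials. Continuing from the hypothetical `simulate_trial` above, this snippet shows only the binned tally itself; actually reproducing the dip would additionally require the incongruent-flanker evidence dynamics of the paper's model.

```python
import numpy as np

# Batch of trials using the sketch above (the paper simulates 100,000
# per model; a smaller batch is enough to illustrate the tally).
results = [simulate_trial(true_hypothesis=1) for _ in range(10_000)]
choices = np.array([c for c, _ in results])
times = np.array([t for _, t in results])
correct = choices == 1  # hypothesis 1 was the true one above

# Conditional accuracy: accuracy within decile bins of decision time.
edges = np.quantile(times, np.linspace(0.0, 1.0, 11))
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (times >= lo) & (times <= hi)
    print(f"t in [{lo:5.0f}, {hi:5.0f}]: accuracy = {correct[mask].mean():.3f}")
```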