Model-Agnostic Fits for Understanding Information Seeking Patterns in Humans
Authors: Soumya Chatterjee, Pradeep Shenoy (pp. 784-791)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Here, we reexamine data from previous carefully designed experiments, collected at scale, that measured and catalogued these biases in aggregate form. We design deep learning models that replicate these biases in aggregate, while also capturing individual variation in behavior. We apply our methods to a recent large study cataloguing biases in human information seeking, integration, and decision-making (Hunt et al. 2016), and demonstrate the following results. ... Data were split 60/40 per subject for training and validation; 6-7 trials of training data per subject per task (2x for Multi-DNN). ... We examined the relationship between quality of fit for each individual subject, and the number of subjects in our training pool, to check whether larger subject pools indeed help learn better predictive models. In this experiment, we measured simulation accuracy on held-out trials for a fixed cohort of subjects designated as test-subjects. |
| Researcher Affiliation | Collaboration | Soumya Chatterjee1*, Pradeep Shenoy 2 1 Indian Institute of Technology Bombay 2 Google Research India soumya@cse.iitb.ac.in, shenoypradeep@google.com |
| Pseudocode | No | The paper describes the model architecture and training process in text and with a diagram (Figure 2), but it does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states that the *data* used was released by other authors (Hunt et al. 2017), but there is no explicit statement or link provided by the authors of *this* paper regarding the open-sourcing of their deep learning model's code or methodology. |
| Open Datasets | Yes | We model data from a cognitive experiment conducted on a large-scale mobile-phone-based experimental platform (Brown et al. 2014) to probe decision-making under uncertainty (Hunt et al. 2016)1. Footnote 1: All data were released by authors as part of the publication (Hunt et al. 2017) under the CC0-1.0 license. (In references: Hunt, L. T.; Rutledge, R. B.; Malalasekera, W. M. N.; Kennerley, S. W.; and Dolan, R. J. 2017. Data from: Approach-induced biases in human information sampling. Dryad, Dataset URL: https://datadryad.org/stash/dataset/doi:10.5061/dryad.nb41c, last accessed May 2020.) |
| Dataset Splits | Yes | Data were split 60/40 per subject for training and validation; 6-7 trials of training data per subject per task (2x for Multi-DNN). |
| Hardware Specification | No | The paper mentions that the *data* was collected on a “mobile-phone-based experimental platform” but does not provide any specific details about the hardware (e.g., GPU, CPU models, memory) used to train or run their deep learning models. |
| Software Dependencies | No | The paper mentions using the “Adam optimizer” but does not specify version numbers for any software components, libraries, or programming languages used in their implementation. |
| Experiment Setup | Yes | Training used Adam optimizer with learning rate 0.003 and batch size 256, for 30 epochs with early stopping. All activation functions are tanh(). Each task network has separate layers per task stage... a) 2 fully connected layers of dimension 10 to produce the next hidden state, b) 2 additional single fully connected layers on top of this hidden state, one each for producing the decision outputs of that state. |
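The per-subject 60/40 train/validation split quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, data layout (a dict mapping subject IDs to trial lists), and seed are hypothetical assumptions.

```python
import random

def per_subject_split(trials_by_subject, train_frac=0.6, seed=0):
    """Split each subject's trials into train/validation portions,
    mirroring the per-subject 60/40 split described in the paper.
    `trials_by_subject` maps a subject ID to that subject's trials."""
    rng = random.Random(seed)
    train, val = {}, {}
    for subject, trials in trials_by_subject.items():
        shuffled = trials[:]          # copy so the input is untouched
        rng.shuffle(shuffled)
        k = int(round(train_frac * len(shuffled)))
        train[subject] = shuffled[:k]
        val[subject] = shuffled[k:]
    return train, val
```

Splitting within each subject (rather than holding out whole subjects) keeps every subject represented in both sets, which matches the report that only 6-7 training trials per subject per task were available.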
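The per-stage architecture quoted in the Experiment Setup row (two fully connected layers of dimension 10 with tanh activations, plus a single linear head per decision output) can be sketched in plain Python. The input dimension, number of heads, and weight initialization here are hypothetical assumptions; the paper does not state which framework was used.

```python
import math
import random

def linear(x, W, b):
    """Affine map: y_i = sum_j W[i][j] * x[j] + b[i]."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def tanh_vec(x):
    return [math.tanh(v) for v in x]

class StageNetwork:
    """Forward pass for one task-stage network: two FC layers of
    dimension 10 with tanh, then one linear head per decision output."""
    def __init__(self, in_dim, hidden=10, n_heads=2, seed=0):
        rng = random.Random(seed)
        def mat(rows, cols):
            return [[rng.uniform(-0.1, 0.1) for _ in range(cols)]
                    for _ in range(rows)]
        self.W1, self.b1 = mat(hidden, in_dim), [0.0] * hidden
        self.W2, self.b2 = mat(hidden, hidden), [0.0] * hidden
        # One single-output linear head per decision output of this stage.
        self.heads = [(mat(1, hidden), [0.0]) for _ in range(n_heads)]

    def forward(self, x):
        h = tanh_vec(linear(x, self.W1, self.b1))   # FC layer 1, dim 10
        h = tanh_vec(linear(h, self.W2, self.b2))   # FC layer 2, dim 10
        outputs = [linear(h, W, b)[0] for W, b in self.heads]
        return h, outputs
```

Training, per the quoted setup, would use the Adam optimizer with learning rate 0.003, batch size 256, and 30 epochs with early stopping; that loop is omitted here since the loss functions per decision output are not specified in this table.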