RTify: Aligning Deep Neural Networks with Human Behavioral Decisions

Authors: Yu-Ang Cheng, Ivan F Rodriguez Rodriguez, Sixuan Chen, Kohitij Kar, Takeo Watanabe, Thomas Serre

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type Experimental The approach is extensively evaluated against various psychophysics experiments. We also show that the approximation can be used to optimize an ideal-observer RNN model to achieve an optimal tradeoff between speed and accuracy without human data. The resulting model is found to account well for human RT data.
Researcher Affiliation Academia Yu-Ang Cheng (1), Ivan Felipe Rodriguez (1), Sixuan Chen (1), Kohitij Kar (2), Takeo Watanabe (1), Thomas Serre (1). (1) Brown University; (2) York University.
Pseudocode Yes
Algorithm 1: Forward function of the WW circuit for Random Dot Motion
 1: Input: Image, threshold
 2: Output: decision times
 3: Initialize s1, s2, decision times
 4: while s1 < threshold and s2 < threshold do
 5:   I1, I2 ← f_w(ζ(Image))
 6:   // Combine excitatory and inhibitory currents
 7:   H1 ← f(J11·s1 − J12·s2 + I0 + I1 + I_OU_noise1)
 8:   H2 ← f(J22·s2 − J21·s1 + I0 + I2 + I_OU_noise2)
 9:   // Calculate rates of change
10:   ds1/dt ← −s1/τs + γ·H1·(1 − s1)
11:   ds2/dt ← −s2/τs + γ·H2·(1 − s2)
12:   Update s1, s2, decision times
13: end while
14: return decision times
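The algorithm above can be sketched as a runnable Euler simulation of the two-population Wong-Wang circuit. This is a minimal illustration, not the paper's implementation: `I1`/`I2` stand in for the stimulus currents `f_w(ζ(Image))`, the constants are textbook Wong-Wang values rather than the paper's fitted ones, and the Ornstein-Uhlenbeck noise is approximated by small Gaussian draws.

```python
import numpy as np

def ww_forward(I1, I2, threshold=0.5, dt=1e-3, max_steps=5000, seed=0):
    """Minimal sketch of the Wong-Wang (WW) two-population decision circuit.

    I1, I2 play the role of the stimulus currents f_w(zeta(Image)).
    All constants below are illustrative textbook values, not the
    paper's fitted parameters.
    """
    rng = np.random.default_rng(seed)
    J11, J22 = 0.2609, 0.2609      # self-excitation (nA)
    J12, J21 = 0.0497, 0.0497      # cross-inhibition (nA)
    I0 = 0.3255                    # background current (nA)
    tau_s, gamma = 0.1, 0.641      # gating time constant (s), coupling
    a, b, d = 270.0, 108.0, 0.154  # f-I transfer-function parameters

    def f(I):
        # Abbott-Chance firing-rate transfer function f(I) = (aI-b)/(1-e^{-d(aI-b)})
        x = a * I - b
        return x / (1.0 - np.exp(-d * x))

    s1 = s2 = 0.1  # synaptic gating variables
    for step in range(1, max_steps + 1):
        # Gaussian stand-in for the OU noise terms of the pseudocode
        noise1, noise2 = 0.002 * rng.standard_normal(2)
        # Combine excitatory and inhibitory currents
        H1 = f(J11 * s1 - J12 * s2 + I0 + I1 + noise1)
        H2 = f(J22 * s2 - J21 * s1 + I0 + I2 + noise2)
        # Euler update of ds_i/dt = -s_i/tau_s + gamma * H_i * (1 - s_i)
        s1 += dt * (-s1 / tau_s + gamma * H1 * (1.0 - s1))
        s2 += dt * (-s2 / tau_s + gamma * H2 * (1.0 - s2))
        if s1 > threshold or s2 > threshold:
            return step * dt, (1 if s1 > threshold else 2)
    return max_steps * dt, 0  # no decision within the simulated horizon
```

With a stronger current to population 1 (e.g. `ww_forward(0.04, 0.0)`), population 1 wins and the first crossing time serves as the model's decision time.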
Open Source Code Yes Code and data are available at https://github.com/Yu-AngCheng/RTify.
Open Datasets Yes We validate our RTify framework on two psychophysics datasets: the RDM dataset [24, 38] and a natural image categorization dataset [39]. ... taken from the COCO dataset [47]
Dataset Splits No Here, histograms are computed for RTs for correct and incorrect trials corresponding to individual experimental conditions (such as coherence levels shown here; see section 3 for details).
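The row above describes histograms of reaction times computed separately for correct and incorrect trials within each experimental condition (e.g., coherence level). A minimal NumPy sketch of that analysis, on synthetic inputs with illustrative names:

```python
import numpy as np

def rt_histograms(rts, correct, coherence, bins=None):
    """Per-condition RT histograms, split by trial correctness.

    rts       : reaction time of each trial (seconds)
    correct   : boolean correctness of each trial
    coherence : condition label (e.g., motion coherence) of each trial
    Returns {(coherence_level, "correct"/"incorrect"): counts}, bin edges.
    """
    rts = np.asarray(rts, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    coherence = np.asarray(coherence)
    if bins is None:
        bins = np.linspace(rts.min(), rts.max(), 21)  # shared edges across groups
    out = {}
    for c in np.unique(coherence):
        for label, mask in (("correct", correct), ("incorrect", ~correct)):
            sel = (coherence == c) & mask
            counts, _ = np.histogram(rts[sel], bins=bins)
            out[(float(c), label)] = counts
    return out, bins
```

Sharing one set of bin edges across all groups keeps the per-condition histograms directly comparable, which is what side-by-side RT distribution plots require.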
Hardware Specification Yes As a side note, all models were trained on a single Nvidia RTX GPU (Titan, 3090, or A6000, with 24, 24, or 48 GB of memory, respectively).
Software Dependencies No The paper does not specify software dependencies with version numbers (e.g., PyTorch, TensorFlow, Python versions).
Experiment Setup Yes First, we trained an RNN consisting of 5 convolutional blocks (Convolution, Batch Norm, ReLU, Max Pooling) and a 4096-unit LSTM with BPTT. ... The RNN was trained for 100 epochs using the Adam optimizer, with a learning rate of 1e-4 at full coherence (c = 99.9%) for the first 10 epochs as a warm-up and 1e-5 at all coherence levels for the remaining 90 epochs. Next, we trained our two different RTify modules: the module fitted to human RTs was trained for 10,000 epochs, and the self-penalty module for 20,000 epochs. In both cases, the Adam optimizer was used, and the weights of the task-optimized RNN were frozen while training the RTify modules.
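The staged procedure quoted above can be summarized as a small configuration table. This is a hypothetical restatement for clarity only: stage names and the `trainable` labels are illustrative, and the paper does not report a learning rate for the RTify stages (left as `None` here).

```python
def training_phases():
    """Hypothetical summary of the staged training described in the setup.

    Epoch counts, learning rates, and coherence settings come from the
    quoted text; stage and module names are illustrative.
    """
    return [
        # Stage 1: task-optimized RNN (5 conv blocks + 4096-unit LSTM), BPTT, Adam.
        {"stage": "rnn_warmup",  "epochs": 10,     "lr": 1e-4,
         "coherence": "full (c = 99.9%)", "trainable": ["rnn"]},
        {"stage": "rnn_main",    "epochs": 90,     "lr": 1e-5,
         "coherence": "all levels",       "trainable": ["rnn"]},
        # Stage 2: RTify modules trained with Adam while the RNN stays frozen.
        {"stage": "rtify_human", "epochs": 10_000, "lr": None,   # lr unreported
         "coherence": "all levels", "trainable": ["rtify_human_fit"]},
        {"stage": "rtify_self",  "epochs": 20_000, "lr": None,   # lr unreported
         "coherence": "all levels", "trainable": ["rtify_self_penalty"]},
    ]
```

The key design point the table encodes is the separation of optimization targets: the RNN's weights are only updated in stage 1, so the RTify modules in stage 2 are fit on top of a fixed task-optimized representation.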