Neural Taskonomy: Inferring the Similarity of Task-Derived Representations from Brain Activity

Authors: Aria Wang, Michael Tarr, Leila Wehbe

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Encoding models based on task features predict activity in different regions across the whole brain. Features from 3D tasks such as keypoint/edge detection explain greater variance compared to 2D tasks, a pattern observed across the whole brain. Using results across all 21 task representations, we constructed a task graph based on the spatial layout of well-predicted brain areas from each task."
Researcher Affiliation | Academia | Aria Y. Wang, Carnegie Mellon University, ariawang@cmu.edu; Michael J. Tarr, Carnegie Mellon University, michaeltarr@cmu.edu; Leila Wehbe, Carnegie Mellon University, lwehbe@cmu.edu
Pseudocode | No | The paper describes its methods and refers to an implementation in PyTorch, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "All code is available on https://github.com/ariaaay/NeuralTaskonomy"
Open Datasets | Yes | "The images used in this paper are from a publicly available large-scale fMRI dataset, BOLD5000 [22]. Images in the BOLD5000 dataset were chosen from standard computer vision datasets (ImageNet [23], COCO [24], and SUN [25])."
Dataset Splits | Yes | "These 4916 image trials are separated into random training, validation, and testing sets during model fitting. For each subject, each voxel's regularization parameter was chosen independently via 7-fold cross-validation based on the prediction performance of the validation data."
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions that the ridge regression model was "implemented in PyTorch; see [21]", but it does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | "To explore how and where visual features are represented in human scene processing, we extracted different feature spaces describing each of the stimulus images and used them in an encoding model to predict brain responses. ... For each subject, each voxel's regularization parameter was chosen independently via 7-fold cross-validation based on the prediction performance of the validation data. ... Model performance was evaluated on the test data using both Pearson's correlation and coefficient of determination (R²)."
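The experiment-setup row describes a voxelwise encoding model: ridge regression from task features to brain responses, with each voxel's regularization strength chosen independently on validation data, then evaluated with Pearson's correlation and R². The sketch below illustrates that pipeline in plain numpy; it is not the authors' PyTorch implementation. The data are synthetic, the alpha grid is arbitrary, and a single validation split stands in for the paper's 7-fold cross-validation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 300 stimuli x 20 task features, 5 "voxels".
# (The paper uses 4916 image trials and Taskonomy-derived feature spaces.)
X = rng.standard_normal((300, 20))
true_w = rng.standard_normal((20, 5))
Y = X @ true_w + 0.5 * rng.standard_normal((300, 5))

# Random train/validation/test split, as in the Dataset Splits row.
idx = rng.permutation(len(X))
tr, va, te = idx[:200], idx[200:250], idx[250:]

def ridge_fit(X, Y, alpha):
    """Closed-form ridge solution W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2."""
    ss_res = ((y_true - y_pred) ** 2).sum(axis=0)
    ss_tot = ((y_true - y_true.mean(axis=0)) ** 2).sum(axis=0)
    return 1 - ss_res / ss_tot

# Choose each voxel's regularization parameter independently,
# based on prediction performance on the validation split.
alphas = [0.1, 1.0, 10.0, 100.0]          # arbitrary illustrative grid
val_r2 = np.stack([
    r_squared(Y[va], X[va] @ ridge_fit(X[tr], Y[tr], a))
    for a in alphas
])                                        # shape: (n_alphas, n_voxels)
best = val_r2.argmax(axis=0)              # per-voxel alpha index

# Refit each voxel with its chosen alpha; evaluate on held-out test data
# using both Pearson's correlation and R^2.
test_r = np.empty(Y.shape[1])
test_r2 = np.empty(Y.shape[1])
for v in range(Y.shape[1]):
    w = ridge_fit(X[tr], Y[tr][:, [v]], alphas[best[v]])
    pred = (X[te] @ w).ravel()
    test_r[v] = np.corrcoef(pred, Y[te][:, v])[0, 1]
    test_r2[v] = r_squared(Y[te][:, v], pred)
    print(f"voxel {v}: alpha={alphas[best[v]]}, r={test_r[v]:.3f}, R2={test_r2[v]:.3f}")
```

In practice the per-voxel alpha search is what makes this loop expensive at whole-brain scale (tens of thousands of voxels), which is presumably why the authors batched it on GPU via PyTorch rather than fitting voxels one at a time as shown here.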