Characterizing Similarity of Visual Stimulus from Associated Neuronal Response

Authors: Vikram Ravindra, Ananth Grama

IJCAI 2020

Reproducibility assessment. Each entry lists the variable, the result, and the LLM response supporting that result.
Research Type: Experimental
We use the functional MRI (fMRI) images of the Sherlock dataset by [Chen et al., 2017], and show strong correlations between our computed archetypes and accompanying video annotations. Furthermore, we show that the archetypes are robust across subjects, which allows us to predict the neuronal response of new subjects.
Researcher Affiliation: Academia
Vikram Ravindra and Ananth Grama, Department of Computer Science, Purdue University, West Lafayette, IN. ravindrv@purdue.edu
Pseudocode: No
The paper describes mathematical formulations and algorithmic steps in prose and equations, but it does not contain any structured pseudocode blocks or algorithm listings.
Open Source Code: No
The paper mentions using a third-party library, FSL (the FMRIB Software Library, http://fsl.fmrib.ox.ac.uk/fsl), but it does not provide any explicit statement about releasing the authors' own source code for the described methodology, nor a link to a repository.
Open Datasets: Yes
We use the naturalistic movie viewing dataset described in [Chen et al., 2017]. Briefly, the dataset consists of functional MRIs of 17 participants who viewed 50 minutes of the BBC show Sherlock.
Dataset Splits: No
The paper states, "We split the set of subjects into a training set of 12 subjects and a test set of 5 subjects," but it does not explicitly define a separate validation split (e.g., as a percentage or subject count).
Hardware Specification: No
The paper describes the fMRI acquisition protocol (e.g., 3mm x 3mm x 3mm voxels, 1.5s sampling interval) and preprocessing, but it provides no details about the computational hardware (e.g., GPU models, CPU types, memory) used to run the data analysis or experiments.
Software Dependencies: No
The paper mentions the use of FSL (the FMRIB Software Library, with a URL) and the Principal Convex Hull Analysis algorithm, but it does not provide version numbers for any software dependency required to replicate the analysis.
Experiment Setup: No
The paper details preprocessing steps (e.g., slice-time correction, motion correction, a high-pass filter with a 140s cutoff, registration) and some analysis parameters (e.g., a consensus threshold of 70%), but it does not provide hyperparameters or system-level training settings (e.g., learning rates, batch sizes, optimizer configurations) of the kind typically found in an experimental-setup section.
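The two reproducibility details the assessment does confirm, a 12/5 subject-level train/test split over the 17 participants and a 70% consensus threshold, can be made concrete with a minimal sketch. The mask data below is random and purely illustrative; the paper does not specify how the split was randomized or how per-subject activity masks were computed.

```python
import numpy as np

# Illustrative sketch only: the paper reports 17 subjects split into a
# 12-subject training set and a 5-subject test set (no validation split).
rng = np.random.default_rng(0)

subjects = rng.permutation(17)
train, test = subjects[:12], subjects[12:]

# Hypothetical per-subject binary activity masks (training subjects x voxels);
# real masks would come from the preprocessed fMRI volumes.
masks = rng.random((len(train), 1000)) > 0.5

# Consensus step: keep voxels flagged as active in at least 70% of the
# training subjects, mirroring the paper's 70% consensus threshold.
consensus = masks.mean(axis=0) >= 0.70
```

Splitting at the subject level, rather than over time points, is what allows the held-out subjects to test whether the archetypes generalize across individuals.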
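The assessment names the Principal Convex Hull Analysis algorithm (archetypal analysis) as the core method but notes that no implementation details are given. As a rough illustration of what that family of methods computes, here is a toy archetypal-analysis routine using alternating projected gradient steps; it is not the authors' implementation, and the parameters (k, lr, iters) are arbitrary choices for the sketch.

```python
import numpy as np

def archetypal_analysis(X, k, iters=200, lr=0.01, seed=0):
    """Toy archetypal analysis (Principal Convex Hull) sketch.

    X: (n_samples, n_features) data matrix.
    Returns A (n_samples, k), rows on the probability simplex, and
    Z (k, n_features), archetypes constrained to the convex hull of X,
    such that X is approximated by A @ Z.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape

    def project_simplex(M):
        # Euclidean projection of each row of M onto the probability simplex.
        u = np.sort(M, axis=1)[:, ::-1]
        css = np.cumsum(u, axis=1) - 1.0
        idx = np.arange(1, M.shape[1] + 1)
        rho = (u - css / idx > 0).sum(axis=1)
        theta = css[np.arange(M.shape[0]), rho - 1] / rho
        return np.maximum(M - theta[:, None], 0.0)

    A = project_simplex(rng.random((n, k)))   # sample -> archetype weights
    B = project_simplex(rng.random((k, n)))   # archetype -> sample weights
    for _ in range(iters):
        Z = B @ X                 # archetypes as convex combinations of data
        R = A @ Z - X             # reconstruction residual
        A = project_simplex(A - lr * R @ Z.T)
        B = project_simplex(B - lr * A.T @ R @ X.T)
    return A, B @ X
```

Because both factors are projected onto the simplex, each archetype stays inside the convex hull of the data, which is what makes archetypes interpretable as extremal response patterns rather than abstract latent directions.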