Can fMRI reveal the representation of syntactic structure in the brain?
Authors: Aniketh Janardhan Reddy, Leila Wehbe
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using these features and fMRI recordings of participants reading a natural text, we model the brain representation of syntax. First, we find that our syntactic structure-based features explain additional variance in the brain activity of various parts of the language system, even after controlling for complexity metrics that capture processing load. |
| Researcher Affiliation | Academia | Aniketh Janardhan Reddy, Machine Learning Department, Carnegie Mellon University (ajreddy@cs.cmu.edu); Leila Wehbe, Machine Learning Department, Carnegie Mellon University (lwehbe@cmu.edu) |
| Pseudocode | No | The paper describes the steps for encoding subtrees and the overall model, but it does not include formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data will be available at https://github.com/anikethjr/brain_syntactic_representations. |
| Open Datasets | Yes | We use the fMRI data of 9 subjects reading chapter 9 of Harry Potter and the Sorcerer's Stone [39], collected and made available freely without restrictions by Wehbe et al. [17]. |
| Dataset Splits | Yes | We test the models in a cross-validation loop: the data is first split into 4 contiguous and equal sized folds. Each model uses three folds of the data for training and one fold for evaluation. (A sketch of this contiguous split is given after the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. It only describes the fMRI data collection and analysis methods. |
| Software Dependencies | No | The paper mentions several software tools and models, such as the "self-attentive encoder-based constituency parser by Kitaev and Klein [32]", "incremental top-down parser [34]", "wordfreq package [35]", "spaCy English dependency parser [37]", "pretrained cased BERT-large model [21]", "Sub2Vec-DBON by Adhikari et al. [31]", and "sklearn's RidgeCV module". However, it does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | For each voxel of each subject, the regularization parameter is chosen independently. [...] The length of the random walks is set to 100000 and we use an extension of the Distributed Bag of Words (DBOW) model proposed by Le and Mikolov [36] for generating Paragraph Vectors called Sub2Vec-DBON by Adhikari et al. [31]. The sliding window length is set to 5 and the model is trained for 20 epochs. (A hedged RidgeCV sketch of the per-voxel regularization choice is given after the table.) |
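The dataset-splits row describes 4 contiguous, equal-sized folds over the fMRI time series. Below is a minimal sketch of that split, assuming a feature matrix `X` and a voxel-response matrix `Y` whose shapes are purely illustrative; `KFold` with `shuffle=False` yields exactly this kind of contiguous partition.

```python
import numpy as np
from sklearn.model_selection import KFold

# Illustrative stand-ins for the paper's data: rows are fMRI time points
# (TRs), columns are stimulus features / voxels. Shapes are hypothetical.
rng = np.random.default_rng(0)
X = rng.standard_normal((1200, 50))      # stimulus features per TR
Y = rng.standard_normal((1200, 2000))    # voxel responses per TR

# shuffle=False keeps each fold contiguous in time, matching the paper's
# "4 contiguous and equal sized folds".
kfold = KFold(n_splits=4, shuffle=False)
for fold, (train_idx, test_idx) in enumerate(kfold.split(X)):
    X_train, X_test = X[train_idx], X[test_idx]
    Y_train, Y_test = Y[train_idx], Y[test_idx]
    # fit an encoding model on three folds, evaluate on the held-out fold
    print(f"fold {fold}: test TRs {test_idx[0]}..{test_idx[-1]}")
```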
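For the experiment-setup row, the per-voxel choice of regularization parameter maps naturally onto scikit-learn's `RidgeCV` with `alpha_per_target=True` (available in scikit-learn >= 0.24), which selects one penalty per output column via efficient leave-one-out CV. This is a hedged sketch, not the authors' code: the alpha grid and data shapes below are assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Hypothetical training and test folds (shapes are illustrative).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((900, 50))     # three folds of features
Y_train = rng.standard_normal((900, 2000))   # voxel responses
X_test = rng.standard_normal((300, 50))      # held-out fold

# alpha_per_target=True chooses the regularization strength independently
# for every voxel (output column), as the paper describes; the grid of
# candidate alphas here is an assumption, not quoted from the paper.
model = RidgeCV(alphas=np.logspace(-2, 6, 9), alpha_per_target=True)
model.fit(X_train, Y_train)

print(model.alpha_.shape)        # (2000,): one alpha per voxel
pred = model.predict(X_test)     # (300, 2000) predicted voxel responses
```

On older scikit-learn versions without `alpha_per_target`, the same effect can be obtained by fitting a separate `RidgeCV` per voxel column, at higher cost.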