Visual Encoding and Decoding of the Human Brain Based on Shared Features
Authors: Chao Li, Baolin Liu, Jianguo Wei
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on a public fMRI dataset confirm the rationality of the encoding models, and comparing with a recently proposed method, our reconstruction results obtain significantly higher accuracy. |
| Researcher Affiliation | Academia | Chao Li¹, Baolin Liu², and Jianguo Wei¹ — ¹College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China; ²School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China |
| Pseudocode | No | The paper describes its methods in prose and mathematical formulas, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps formatted like code. |
| Open Source Code | Yes | The code for f_θ is available at https://github.com/Nehemiah-Li/Deep-image-prior. |
| Open Datasets | Yes | In this study, we use the fMRI dataset collected by Kay et al. [Kay et al., 2008, 2011; Naselaris et al., 2009]. |
| Dataset Splits | Yes | In our study, we further divide the training set into two parts: training set Trn1 is used to train the encoding models, which contains 1,575 images; training set Trn2 is used to evaluate the models, which contains 175 images. |
| Hardware Specification | No | The paper mentions that 'a 4T scanner was used to obtain fMRI data' and 'We extract features from pre-trained AlexNet with Caffe version', but it does not specify any hardware used for running the computational experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions 'pre-trained AlexNet with Caffe version', but it does not provide specific version numbers for Caffe or any other software libraries or frameworks used in its experiments. |
| Experiment Setup | Yes | The paper describes specific experimental settings such as using 'Lasso regression', 'non-negative dictionary' training, an 'L2 regularization term', and utilizing 'deep image prior' within a 'randomly initialized hourglass network with 6 layers'. It also mentions 'λ_α and β are constants used to control the sparsity and variance of α, respectively. γ is used to control the variance of M_pool1' and 'σ_p and σ_f are the coefficients of the two loss terms, respectively.' |
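The reported split of the 1,750 training stimuli into Trn1 (1,575 images, for fitting the encoding models) and Trn2 (175 images, for evaluating them) can be sketched as below. This is a minimal illustration: the paper states only the two subset sizes, so the random 90/10 assignment and the seed here are assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed, for reproducibility of the sketch

# 1,750 training images total, per the Kay et al. dataset description.
n_total, n_trn1 = 1750, 1575
indices = rng.permutation(n_total)

# Trn1: fit the encoding models; Trn2: evaluate them.
trn1_idx, trn2_idx = indices[:n_trn1], indices[n_trn1:]

assert len(trn1_idx) == 1575 and len(trn2_idx) == 175
assert not set(trn1_idx) & set(trn2_idx)  # subsets are disjoint
```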
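The 'Lasso regression' named in the experiment setup fits a sparse linear encoding model per voxel, mapping image features to the measured fMRI response. The sketch below shows that idea with scikit-learn's `Lasso`; the synthetic data, array shapes, and `alpha` value are illustrative assumptions (in the paper, `X` would hold AlexNet features of the Trn1 images and `y` one voxel's responses).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1,575 stimuli x 256 feature dimensions (assumed size).
X = rng.standard_normal((1575, 256))
w_true = np.zeros(256)
w_true[:5] = 1.0  # only a few features truly drive this "voxel"
y = X @ w_true + 0.1 * rng.standard_normal(1575)

# One sparse linear encoding model for the voxel.
model = Lasso(alpha=0.05).fit(X, y)

# The L1 penalty zeroes out most weights, keeping the model sparse.
n_active = int(np.sum(model.coef_ != 0))
print(n_active)
```

The L1 penalty is what distinguishes Lasso from plain least squares here: with far more feature dimensions than informative ones, it drives irrelevant weights exactly to zero, which is why sparse encoding models are a common choice for voxel-wise fMRI prediction.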