Generative Decoding of Visual Stimuli
Authors: Eleni Miliotou, Panagiotis Kyriakis, Jason D Hinman, Andrei Irimia, Paul Bogdan
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the utility of our method in practice, we carry out a series of experimental simulations. To measure the performance of our method, we use both qualitative comparisons of the reconstructions as well as quantitative metrics. In what follows, we give the details of the dataset used, the metrics implemented and baseline comparisons. Ablation Study: We perform an ablation study, with the number of hierarchical layers and, consecutively, the number of brain regions, being the ablated parameter. |
| Researcher Affiliation | Academia | (1) Department of Neurology, University of California Los Angeles, Los Angeles, US; (2) Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, US; (3) Department of Gerontology, University of Southern California, Los Angeles. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statements about making its source code available or links to a code repository. |
| Open Datasets | Yes | Dataset: We applied our pipeline on a commonly used, publicly available dataset known as Generic Object Decoding (GOD). We use the post-processed fMRI data provided by Horikawa et al. (Horikawa & Kamitani, 2017), which contain voxels from 7 brain areas (V1, V2, V3, V4, FFA, PPA, LOC). |
| Dataset Splits | No | The paper explicitly states a training set size (…) |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory amounts used for the experiments. |
| Software Dependencies | No | The paper mentions using a (…) |
| Experiment Setup | Yes | The decoder part of our HVAE transforms the hierarchical latent variables to output images and consists of 4 transposed convolutional layers. The numbers of decoder filters are [128, 64, 32, 16, 3] and all kernel sizes are set to 5. Each transposed convolutional layer is followed by 2D batch normalization and a ReLU non-linearity. The metrics saturate at about 800 epochs, which gives us an empirical estimate of how many iterations our model needs to achieve good performance. |
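
To make the decoder configuration quoted in the Experiment Setup row concrete, the following is a minimal PyTorch sketch of a 4-layer transposed-convolution decoder with filter counts [128, 64, 32, 16, 3], kernel size 5, and batch normalization plus ReLU after each layer, as described. The strides, paddings, and the latent spatial size (here a hypothetical 7×7 feature map) are not stated in the paper and are assumptions made only for illustration; this is not the authors' released implementation.

```python
# Minimal sketch of the decoder described above (assumed PyTorch implementation).
# Filter counts [128, 64, 32, 16, 3] and kernel size 5 come from the paper excerpt;
# stride/padding/latent size are assumptions for illustration only.
import torch
import torch.nn as nn


class Decoder(nn.Module):
    """Four transposed-convolution layers, each followed by BatchNorm2d and ReLU."""

    def __init__(self, channels=(128, 64, 32, 16, 3), kernel_size=5):
        super().__init__()
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            # stride=2, padding=2, output_padding=1 doubles the spatial size;
            # these hyperparameters are NOT given in the paper.
            layers.append(nn.ConvTranspose2d(c_in, c_out, kernel_size,
                                             stride=2, padding=2, output_padding=1))
            # Per the excerpt, each transposed conv is followed by 2D batch norm and ReLU.
            layers.append(nn.BatchNorm2d(c_out))
            layers.append(nn.ReLU(inplace=True))
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)


if __name__ == "__main__":
    decoder = Decoder()
    z = torch.randn(1, 128, 7, 7)   # hypothetical latent feature map
    print(decoder(z).shape)         # torch.Size([1, 3, 112, 112])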