Surround Modulation: A Bio-inspired Connectivity Structure for Convolutional Neural Networks
Authors: Hosein Hasani, Mahdieh Soleymani Baghshah, Hamid Aghajan
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the performance of the proposed method, we set up three types of experiments. First, we apply the proposed model to a standard image classification task. Then, we evaluate the robustness of the model in different visual situations. Finally, we analyze the characteristics of neural activities in the presence of surround modulation and compare the results with those reported for the visual cortex. |
| Researcher Affiliation | Academia | Hosein Hasani, Department of Electrical Engineering, Sharif University of Technology (hasani.hosein@ee.sharif.edu); Mahdieh Soleymani Baghshah, Department of Computer Engineering, Sharif University of Technology (soleymani@sharif.edu); Hamid Aghajan, Department of Electrical Engineering, Sharif University of Technology (aghajan@ee.sharif.edu) |
| Pseudocode | No | The paper describes the Surround Modulation (SM) kernel using equations and textual descriptions, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. (A hedged sketch of such a kernel is given after this table.) |
| Open Source Code | No | The paper does not contain any statement about releasing the source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We used the ImageNet dataset [10] for object recognition as it contains natural images with an acceptable resolution to test surround modulation. For further analysis, we composed a baseline dataset, hereby called baseline-ImageNet, by randomly choosing 100 categories. From each category, 500 instances were randomly chosen for training, 50 instances for validation, and 100 instances for the test set. All images were cropped around their centers and resized to 160×160 pixels. (A split/preprocessing sketch follows the table.) |
| Dataset Splits | Yes | From each category, 500 instances were randomly chosen for training, 50 instances for validation, and 100 instances for the test set. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or cloud instance types used for running the experiments. It only states 'All implementations are done in Tensorflow [1]'. |
| Software Dependencies | No | The paper mentions 'All implementations are done in Tensorflow [1], and during the training procedure, Adam optimizer [28]' but does not provide specific version numbers for these software dependencies, which is required for reproducibility. |
| Experiment Setup | Yes | All implementations are done in Tensorflow [1], and during the training procedure, Adam optimizer [28] with a learning rate of 10⁻⁴ is used to minimize the cross-entropy loss (see Supplementary Materials for more details). We train all of the networks from scratch by initializing trainable weights with Xavier initialization [14], and repeat each experiment 10 times. (A Keras sketch of this setup follows the table.) |
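
The SM kernel referenced in the Pseudocode row is described in the paper with equations rather than pseudocode. Surround modulation of this kind is commonly modeled as a difference of Gaussians (DoG) with an excitatory center and an inhibitory surround, and the sketch below assumes that form; the kernel size and the center/surround widths (`size`, `sigma_c`, `sigma_s`) are illustrative placeholders, not values taken from the paper.

```python
import numpy as np
import tensorflow as tf

def dog_kernel(size=5, sigma_c=1.0, sigma_s=2.5):
    """Fixed difference-of-Gaussians kernel: excitatory center, inhibitory surround.

    size, sigma_c, and sigma_s are illustrative assumptions, not the paper's values.
    """
    ax = np.arange(size, dtype=np.float32) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return (center - surround).astype(np.float32)

def surround_modulate(feature_maps):
    """Convolve each channel of a [batch, H, W, C] tensor with the same
    fixed (non-trainable) SM kernel via a depthwise convolution."""
    k = dog_kernel()
    channels = feature_maps.shape[-1]
    # Depthwise filter shape: [height, width, in_channels, channel_multiplier].
    dw_filter = tf.constant(np.tile(k[:, :, None, None], (1, 1, channels, 1)))
    return tf.nn.depthwise_conv2d(feature_maps, dw_filter,
                                  strides=[1, 1, 1, 1], padding="SAME")
```

For example, `surround_modulate(tf.random.normal([1, 32, 32, 16]))` returns a tensor of the same shape, with each channel laterally modulated by its spatial neighborhood.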
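
The baseline-ImageNet protocol quoted in the Open Datasets row (100 randomly chosen categories; 500/50/100 train/validation/test instances per category; center crop; resize to 160×160) is mechanical enough to sketch. The helper below is a hypothetical reconstruction: `class_to_files` (a mapping from category name to image paths) and the `seed` are assumptions, since the paper reports neither its file layout nor a random seed.

```python
import random
import tensorflow as tf

def make_baseline_imagenet_split(class_to_files, seed=0):
    """Rebuild the paper's baseline-ImageNet split: 100 random classes,
    500 train / 50 val / 100 test images per class. The seed is an
    assumption; the paper does not report one."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(class_to_files), 100)
    splits = {"train": [], "val": [], "test": []}
    for label, name in enumerate(classes):
        files = rng.sample(class_to_files[name], 500 + 50 + 100)
        splits["train"] += [(f, label) for f in files[:500]]
        splits["val"] += [(f, label) for f in files[500:550]]
        splits["test"] += [(f, label) for f in files[550:]]
    return splits

def preprocess(image):
    """Center-crop to a square, then resize to 160x160, as described in the paper."""
    side = tf.minimum(tf.shape(image)[0], tf.shape(image)[1])
    image = tf.image.resize_with_crop_or_pad(image, side, side)
    return tf.image.resize(image, [160, 160])
```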
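
The quoted setup (TensorFlow, Adam with a learning rate of 10⁻⁴, cross-entropy loss, Xavier initialization, 10 repetitions per experiment) maps directly onto a Keras configuration. The sketch below is a minimal TensorFlow 2 rendering of those settings; the two-layer model is a placeholder, since the paper's SM network architecture is not restated in this report.

```python
import tensorflow as tf

def build_and_compile(num_classes=100):
    """Reported training settings: Adam, lr = 1e-4, cross-entropy, Xavier init."""
    init = tf.keras.initializers.GlorotUniform()  # Xavier initialization [14]
    # Placeholder architecture; the paper's actual SM network is not reproduced here.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, activation="relu",
                               kernel_initializer=init,
                               input_shape=(160, 160, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, kernel_initializer=init),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    return model

# The paper trains from scratch and repeats each experiment 10 times:
# models = [build_and_compile() for _ in range(10)]
```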