Visual hyperacuity with moving sensor and recurrent neural computations
Authors: Alexander Rivkind, Or Ram, Eldad Assa, Michael Kreiserman, Ehud Ahissar
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Applying our system to CIFAR-10 and CIFAR-100 datasets down-sampled via an 8x8 sensor, we found that (i) classification accuracy, which is drastically reduced by this down-sampling, is mostly restored to its 32x32 baseline level when using a moving sensor and recurrent connectivity, (ii) in this setting, neurons in the early layers exhibit a wide repertoire of selectivity patterns, spanning the spatio-temporal selectivity space, with neurons preferring different combinations of spatial and temporal patterning, and (iii) curved sensor trajectories improve visual acuity compared to straight trajectories, echoing recent experimental findings involving eye-tracking in challenging conditions. (A hedged sketch of this drift-like sampling setup appears after the table.) |
| Researcher Affiliation | Academia | Dept. of Brain Sciences, Weizmann Institute, Rehovot, Israel |
| Pseudocode | No | The paper describes network architectures and procedures but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code can be found at https://github.com/orram/Dynamical Recurrent Classifier |
| Open Datasets | Yes | To create a synthetic setting reminiscent of ocular drift, we used images from popular CIFAR datasets (Krizhevsky et al., 2009) |
| Dataset Splits | No | The paper mentions 'Test-set accuracy' (Table 1) but does not provide specific details on training, validation, and test dataset splits (e.g., percentages, sample counts, or explicit validation set usage). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory, or specific machine names) used for running the experiments. |
| Software Dependencies | No | Our model was mostly implemented in the Keras package (Chollet, 2015), with the convolutional GRU layer adapted from the project of Van Valen et al. (2016). A standard OpenCV (Bradski, 2000) function... While the software packages are named, specific version numbers for these dependencies are not provided. (A hedged Keras sketch of a recurrent classifier over such low-resolution views appears after the table.) |
| Experiment Setup | No | The paper describes the general training procedure and model architecture (e.g., feature learning paradigm, loss functions, network layers in Table S3), but does not explicitly provide specific hyperparameter values such as learning rate, batch size, or number of epochs in the main text. |
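
As context for the dataset and experiment-setup rows above, the following is a minimal, hypothetical sketch of the kind of drift-like sampling the paper describes: a 32x32 CIFAR image is shifted along a short sensor trajectory and each shifted view is down-sampled to 8x8 with a standard OpenCV resize. The trajectory statistics (number of steps, step size, curvature) and the function names are illustrative assumptions, not values or code taken from the paper.

```python
# Hypothetical sketch: sample a sequence of 8x8 low-resolution views of a
# 32x32 CIFAR image along a small drift-like sensor trajectory.
# Trajectory parameters below are illustrative assumptions only.
import numpy as np
import cv2
from tensorflow.keras.datasets import cifar10

def drift_trajectory(n_steps=10, max_shift=2, curved=True, rng=None):
    """Return integer (dx, dy) offsets for a short sensor trajectory."""
    rng = np.random.default_rng() if rng is None else rng
    if curved:
        # Random-walk-like offsets, loosely mimicking curved ocular drift.
        steps = rng.integers(-1, 2, size=(n_steps, 2)).cumsum(axis=0)
    else:
        # Straight trajectory along a fixed random direction.
        direction = rng.choice([-1, 1], size=2)
        steps = np.outer(np.arange(n_steps), direction)
    return np.clip(steps, -max_shift, max_shift)

def sample_views(image, trajectory, sensor_res=(8, 8)):
    """Shift the image along the trajectory and down-sample each view."""
    views = []
    for dx, dy in trajectory:
        shifted = np.roll(image, shift=(int(dy), int(dx)), axis=(0, 1))
        views.append(cv2.resize(shifted, sensor_res,
                                interpolation=cv2.INTER_AREA))
    return np.stack(views)  # shape: (n_steps, 8, 8, 3)

(x_train, y_train), _ = cifar10.load_data()
seq = sample_views(x_train[0], drift_trajectory())
print(seq.shape)  # (10, 8, 8, 3)
```

The dependencies row notes that the model was implemented mostly in Keras, with a convolutional GRU layer adapted from Van Valen et al. (2016). Keras does not ship a convolutional GRU layer, so the sketch below substitutes the built-in ConvLSTM2D as a stand-in recurrent layer; the layer widths and depth are assumptions and do not reproduce the paper's architecture (Table S3).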
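```python
# Minimal Keras sketch of a recurrent classifier over a sequence of 8x8
# sensor views. ConvLSTM2D is used here as a readily available stand-in for
# the paper's convolutional GRU; sizes are illustrative assumptions.
from tensorflow.keras import layers, models

def build_recurrent_classifier(n_steps=10, sensor_res=(8, 8), n_classes=10):
    inputs = layers.Input(shape=(n_steps, *sensor_res, 3))
    # Per-frame spatial features, shared across time steps.
    x = layers.TimeDistributed(
        layers.Conv2D(32, 3, padding="same", activation="relu"))(inputs)
    # Recurrent integration across the sensor trajectory.
    x = layers.ConvLSTM2D(64, 3, padding="same", return_sequences=False)(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_recurrent_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Feeding sequences produced by `sample_views` into `model.fit` would reproduce the general pipeline shape (moving low-resolution sensor plus recurrent read-out), but hyperparameters such as learning rate, batch size, and epochs are not reported in the paper's main text, so none are asserted here.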