Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System
Authors: Chengxu Zhuang, Jonas Kubilius, Mitra JZ Hartmann, Daniel L. Yamins
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | First, we construct a biophysically-realistic model of the rat whisker array. We then generate a large dataset of whisker sweeps across a wide variety of 3D objects in highly-varying poses, angles, and speeds. Next, we train DNNs from several distinct architectural families to solve a shape recognition task in this dataset. Each architectural family represents a structurally-distinct hypothesis for processing in the whisker-trigeminal system, corresponding to different ways in which spatial and temporal information can be integrated. We find that most networks perform poorly on the challenging shape recognition task, but that specific architectures from several families can achieve reasonable performance levels. Finally, we show that Representational Dissimilarity Matrices (RDMs), a tool for comparing population codes between neural systems, can separate these higher-performing networks with data of a type that could plausibly be collected in a neurophysiological or imaging experiment. Our results are a proof-of-concept that DNN models of the whisker-trigeminal system are potentially within reach. |
| Researcher Affiliation | Academia | Chengxu Zhuang, Department of Psychology, Stanford University, Stanford, CA 94305, chengxuz@stanford.edu; Jonas Kubilius, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, and Brain and Cognition, KU Leuven, Belgium, qbilius@mit.edu; Mitra Hartmann, Departments of Biomedical Engineering and Mechanical Engineering, Northwestern University, Evanston, IL 60208, hartmann@northwestern.edu; Daniel Yamins, Departments of Psychology and Computer Science, Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, yamins@stanford.edu |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. It describes methods textually and uses mathematical equations, but no structured algorithmic format. |
| Open Source Code | Yes | Code for all results, including the whisker model and neural networks, is publicly available at https://github.com/neuroailab/whisker_model. |
| Open Datasets | Yes | The objects used in each sweep are chosen from a subset of the Shape Net [6] dataset, which contains over 50,000 3D objects, each with a distinct geometry, belonging to 55 categories. |
| Dataset Splits | Yes | Because we evaluate networks on held-out validation data, it is not inherently unfair to compare results from networks with different numbers of parameters, but for simplicity we generally evaluated models with similar numbers of parameters: exceptions are noted where they occur. |
| Hardware Specification | No | The paper acknowledges "hardware donation from the NVIDIA Corporation" but does not specify any particular GPU models, CPU models, or other hardware details used for running the experiments. |
| Software Dependencies | No | The paper mentions using "Bullet [33], an open-source real-time physics engine" and "Hyperopt: A python library for optimizing the hyperparameters of machine learning algorithms [3]". However, it does not provide version numbers for these software dependencies, which reproducibility requires. |
| Experiment Setup | No | The paper mentions architectural parameters such as "how many layers of each type are in the network, how many units are allocated to each layer, what kernel sizes are used at each layer, and so on." and states that a "list of the specific models and parameters are given in the supplementary materials." However, specific hyperparameter values like learning rates, batch sizes, or optimizer settings are not explicitly detailed in the main text. |
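The paper's key model-comparison tool, the Representational Dissimilarity Matrix (RDM), compares population codes by measuring how differently a system responds to each pair of stimuli. A minimal sketch of the standard construction (one minus the Pearson correlation between population response patterns) is shown below; the array shapes and random responses are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical population responses: rows = stimuli (e.g., swept objects),
# columns = units of a trained network. Shapes are illustrative only.
rng = np.random.default_rng(0)
responses = rng.normal(size=(8, 100))  # 8 stimuli, 100 model units

def rdm(resp):
    """Representational Dissimilarity Matrix: 1 - Pearson correlation
    between the population response patterns for each pair of stimuli."""
    return 1.0 - np.corrcoef(resp)

d = rdm(responses)
assert d.shape == (8, 8)             # one entry per stimulus pair
assert np.allclose(np.diag(d), 0.0)  # each pattern is identical to itself
assert np.allclose(d, d.T)           # dissimilarity is symmetric
```

Because an RDM depends only on pairwise dissimilarities, not on unit identities, two systems (e.g., a DNN layer and a neural recording site) can be compared by correlating their RDMs, which is what makes the measure plausible to collect in a physiology or imaging experiment.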