Structured and Deep Similarity Matching via Structured and Deep Hebbian Networks
Authors: Dina Obeid, Hugo Ramambason, Cengiz Pehlevan
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulations show that our networks learn meaningful features. From Section 6 (Simulations): Next, we illustrate the performance of the structured and deep similarity matching networks in various datasets. |
| Researcher Affiliation | Academia | Dina Obeid, Hugo Ramambason, Cengiz Pehlevan; John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA; {dinaobeid@seas,hugo_ramambason@g,cpehlevan@seas}.harvard.edu |
| Pseudocode | No | The paper describes algorithms and equations (e.g., Equations 1, 2, 17, and 19) but does not present them as structured pseudocode or in an algorithm block. |
| Open Source Code | No | The paper does not provide any statement or link to open-source code for the methodology described. |
| Open Datasets | Yes | We trained a 3-layer, locally connected Hebbian/anti-Hebbian neural network with examples from the "Labeled Faces in the Wild" dataset [33]. We trained a single-layer structured similarity matching network on the MNIST dataset. |
| Dataset Splits | No | The paper does not explicitly mention training, validation, or test dataset splits with specific percentages or counts. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | Classification was done using the scikit-learn library's LinearSVC with default parameters; no version numbers for software dependencies are provided. (A minimal sketch of this classification step appears after the table.) |
| Experiment Setup | Yes | Neural activation functions were f(a) = max(min(a, 1), 0). We used a regularized version of the similarity matching cost [31] to enforce pattern decorrelation in the first and second layers. We trained with different γ values; shown are features for γ = 0.01. The network had a stride of 2, and each neuron received input from a patch of radius r₀ = 4. Neurons belonging to the same site had inhibitory recurrent connections. We used the hyperbolic tangent activation function, tanh(x). (Sketches of the reported setup details appear after the table.) |
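
The two activation functions named in the setup row can be written out directly. The following is a minimal NumPy transcription of the stated formulas, f(a) = max(min(a, 1), 0) and tanh(x); it is only a rendering of those formulas, not the authors' implementation:

```python
# Transcription of the two activation functions quoted in the setup row.
import numpy as np

def clipped_linear(a):
    """f(a) = max(min(a, 1), 0): linear between 0 and 1, saturating outside."""
    return np.clip(a, 0.0, 1.0)

def tanh_activation(a):
    """Hyperbolic tangent activation, tanh(x)."""
    return np.tanh(a)

a = np.linspace(-2, 2, 5)      # [-2, -1, 0, 1, 2]
print(clipped_linear(a))       # [0. 0. 0. 1. 1.]
print(tanh_activation(a))
```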
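The stride-2, radius-4 local connectivity can likewise be illustrated. The sketch below is a hypothetical rendering of that receptive-field layout; the function name and the MNIST-sized 28×28 input are our assumptions, not details taken from the paper:

```python
# Hypothetical illustration of the reported local connectivity: each site
# sees a square patch of radius r0 = 4 (a 9x9 receptive field), and sites
# are laid out with stride 2. Names and input size are ours, not the paper's.
import numpy as np

def extract_patches(image, radius=4, stride=2):
    """Return the input patch seen by each site of a locally connected layer."""
    h, w = image.shape
    patches = []
    for i in range(radius, h - radius, stride):
        for j in range(radius, w - radius, stride):
            patches.append(image[i - radius:i + radius + 1,
                                 j - radius:j + radius + 1])
    return np.stack(patches)

img = np.random.rand(28, 28)          # e.g., an MNIST-sized input
print(extract_patches(img).shape)     # (100, 9, 9) for these settings
```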
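Finally, the classification step quoted under Software Dependencies uses scikit-learn's LinearSVC with default parameters. A minimal sketch of that step follows; since the paper releases no code, the feature matrices and labels are placeholder stand-ins for the learned network features:

```python
# Minimal sketch of the reported classification step: scikit-learn's
# LinearSVC with default parameters, fit on features produced by the
# Hebbian network. X_train/X_test and y_train/y_test are hypothetical
# placeholders; the paper provides no accompanying code.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in feature matrices (e.g., network outputs for MNIST digits).
X_train = rng.standard_normal((1000, 50))
y_train = rng.integers(0, 10, size=1000)
X_test = rng.standard_normal((200, 50))
y_test = rng.integers(0, 10, size=200)

clf = LinearSVC()            # default parameters, as stated in the paper
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```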