Stochastic Gradient Descent-Induced Drift of Representation in a Two-Layer Neural Network
Authors: Farhad Pashakhanloo, Alexei Koulakov
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Motivated by recent experimental findings of stimulus-dependent drift in the piriform cortex, we use theory and simulations to study this phenomenon in a two-layer linear feedforward network. |
| Researcher Affiliation | Academia | Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA. |
| Pseudocode | No | The paper contains mathematical derivations and theoretical models but no pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper describes generating synthetic data ('stimuli are drawn randomly and independently from a standard n-dimensional Gaussian distribution') but does not refer to a publicly available or open dataset with concrete access information. |
| Dataset Splits | No | The paper discusses numerical simulations and analytical derivations but does not specify dataset splits (e.g., training, validation, test percentages or counts) in the context of machine learning datasets. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | In both plots m = n = 10, p = 20, γ = 0.04, and η = 0.005. (bottom) History of representations for three trial stimuli after 2.2 × 10⁵ training steps. n = p = 3, γ = 0.1, η = 0.1, and α = 0.5. A hedged simulation sketch of this setup appears after the table. |
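
The table above quotes concrete hyperparameters but the paper itself ships no code, so the following is a minimal NumPy sketch of what such a simulation might look like. Only the dimensions (m = n = 10, p = 20), the regularization γ = 0.04, the learning rate η = 0.005, the i.i.d. standard-Gaussian stimuli, and the idea of tracking representations of fixed trial stimuli come from the quoted text; the reconstruction objective against a fixed linear map `A`, the step count, and the drift measurement are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions and hyperparameters quoted in the table above.
n, p, m = 10, 20, 10      # input, hidden, and output sizes
gamma, eta = 0.04, 0.005  # weight decay and SGD learning rate

# Assumed fixed linear target map A (hypothetical; the paper's task may differ).
A = rng.standard_normal((m, n)) / np.sqrt(n)
W1 = 0.1 * rng.standard_normal((p, n))  # input -> hidden weights
W2 = 0.1 * rng.standard_normal((m, p))  # hidden -> output weights

probes = rng.standard_normal((3, n))  # three fixed "trial stimuli" to track
history = []                          # hidden responses to the probes over time

for t in range(20_000):
    # Stimuli drawn i.i.d. from a standard n-dimensional Gaussian, as quoted.
    x = rng.standard_normal(n)
    h = W1 @ x            # hidden-layer representation
    err = W2 @ h - A @ x  # output error against the assumed target
    # One SGD step on 0.5*||err||^2 + 0.5*gamma*(||W1||^2 + ||W2||^2);
    # gradients are computed before either weight matrix is updated.
    gW2 = np.outer(err, h) + gamma * W2
    gW1 = np.outer(W2.T @ err, x) + gamma * W1
    W2 -= eta * gW2
    W1 -= eta * gW1
    if t % 100 == 0:
        history.append(probes @ W1.T)  # (3, p) probe representations

history = np.stack(history)
# Even after the loss plateaus, per-stimulus hidden representations keep moving:
drift = np.linalg.norm(history[-1] - history[len(history) // 2], axis=1)
print("probe representation drift over the second half of training:", drift)
```

Tracking the hidden responses to three fixed probe stimuli mirrors the "history of representations for three trial stimuli" quoted in the setup row: the drift of interest is in the hidden layer, driven by SGD noise, even while the input-output map stays fit to the task.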