Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Memorization Capacity of Neural Networks with Conditional Computation

Authors: Erdem Koyuncu

ICLR 2023 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | We study the fundamental limits of neural conditional computation from the perspective of memorization capacity. |
Researcher Affiliation Academia Erdem Koyuncu, Department of Electrical and Computer Engineering, University of Illinois Chicago, EMAIL
| Pseudocode | Yes | Algorithm 1: An example conditional neural network |
| Open Source Code | No | The paper does not provide any information about open-source code for the described methodology. |
| Open Datasets | No | This is a theoretical paper that defines a dataset conceptually (X = {x1, ..., xn} ⊂ R^p) but does not use or provide access information for a publicly available empirical dataset for training. |
| Dataset Splits | No | This is a theoretical paper and does not describe empirical experiments involving validation sets. |
| Hardware Specification | No | This is a theoretical paper and does not describe any empirical experiments, so no hardware specifications are mentioned. |
| Software Dependencies | No | This is a theoretical paper and does not describe any empirical experiments, so no software dependencies with version numbers are listed. |
| Experiment Setup | No | This is a theoretical paper and does not describe any empirical experiments or their setup, including hyperparameters or training settings. |
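For readers unfamiliar with the term, "conditional computation" means that only a data-dependent subset of a network's weights is evaluated for each input. The following is a minimal hypothetical sketch of that idea (not the paper's Algorithm 1): a cheap gate routes each input to one of two expert sub-networks, so only one expert's weights are used per example. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy example, not the paper's Algorithm 1.
p, h = 4, 8                       # input dimension, hidden width (arbitrary)
gate_w = rng.normal(size=p)       # gating hyperplane: decides which expert runs
experts = [
    (rng.normal(size=(h, p)), rng.normal(size=h)),  # expert 0: (W, v)
    (rng.normal(size=(h, p)), rng.normal(size=h)),  # expert 1: (W, v)
]

def forward(x: np.ndarray) -> float:
    """Evaluate only the expert selected by the gate (conditional computation)."""
    k = int(gate_w @ x > 0)                    # cheap routing decision
    W, v = experts[k]                          # weights of the chosen expert only
    return float(v @ np.maximum(W @ x, 0.0))   # one-hidden-layer ReLU expert

x = rng.normal(size=p)
y = forward(x)
```

Per input, only one expert's matrix-vector product is computed; the unselected expert contributes no operations, which is the efficiency motivation behind studying the memorization capacity of such architectures.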