Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Associative Memory Using Dictionary Learning and Expander Decoding

Authors: Arya Mazumdar, Ankit Singh Rawat

AAAI 2017 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Though our main contribution is theoretical, in this section we evaluate the proposed associative memory on a synthetic dataset to verify that our method works. Only a representative figure is presented here (Fig. 2)."
Researcher Affiliation | Academia | Arya Mazumdar, College of Information & Computer Science, University of Massachusetts Amherst (EMAIL); Ankit Singh Rawat, Research Laboratory of Electronics, Massachusetts Institute of Technology (EMAIL)
Pseudocode | Yes | "Figure 1: Recovery algorithm for sparse vector from expander graphs based measurement matrix (Jafarpour et al. 2009)."
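The recovery procedure cited in the row above (not reproduced in this report) follows the expander-decoding idea of Jafarpour et al. 2009: repeatedly find a coordinate whose neighboring measurements mostly agree on a non-zero residual "gap," and correct that coordinate by the agreed amount. A minimal sketch, assuming a small 0/1 measurement matrix and a strict-majority threshold (both illustrative choices, not parameters from the paper):

```python
import numpy as np
from collections import Counter

def expander_decode(A, y, max_iter=50):
    """Recover a sparse x from y = A @ x, where A is a 0/1 matrix whose
    columns play the role of neighborhoods in an expander graph.
    Illustrative sketch of the Jafarpour et al. (2009) style decoder."""
    m, n = A.shape
    x_hat = np.zeros(n)
    for _ in range(max_iter):
        gap = y - A @ x_hat            # per-measurement residual
        if not gap.any():
            break                      # all measurements explained
        updated = False
        for j in range(n):
            rows = np.flatnonzero(A[:, j])   # measurements touching node j
            g, count = Counter(gap[rows]).most_common(1)[0]
            # Update only if a strict majority of j's neighbors agree on
            # the same non-zero gap (expansion makes this vote reliable).
            if g != 0 and count > len(rows) / 2:
                x_hat[j] += g
                gap[rows] -= g         # keep the residual consistent
                updated = True
        if not updated:
            break                      # no confident correction left
    return x_hat
```

With a well-spread matrix (pairwise column overlaps of at most one row), the majority vote isolates each non-zero coordinate of x in a handful of passes.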
Open Source Code | No | The paper does not contain any statement about making the source code available or provide a link to a code repository.
Open Datasets | No | The paper uses synthetic datasets which are generated for the experiments, but does not provide access information (e.g., link, DOI, or citation for a public dataset).
Dataset Splits | No | The paper does not specify explicit training, validation, or test dataset splits. The data is synthetically generated for each run, with details provided for the generation process.
Hardware Specification | No | The paper does not explicitly describe any specific hardware used to run the experiments.
Software Dependencies | No | The paper does not provide specific software names with version numbers that would be necessary to replicate the experiments.
Experiment Setup | Yes | "We consider three sets of system parameters (m, n, d) for the dataset to be stored. For each set of parameters, we first generate an m × n random matrix B according to the sparse-sub-Gaussian model (cf. Sec. 2.1). Each non-zero entry of the matrix B is drawn uniformly at random from the set {±1, ±2, ±3}. ... For a fixed number E of errors, we generate 100 error vectors e ∈ R^n with the number of non-zero entries in each error vector equal to E. The non-zero entries in these vectors are uniformly generated from the set {±1, . . . , ±4}."
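The quoted setup can be mirrored with a short generator. The ± value sets are reconstructed from the symbol-mangled excerpt, and the density of B is an assumed stand-in parameter; the exact sparse-sub-Gaussian model should be taken from Sec. 2.1 of the original paper:

```python
import numpy as np

def generate_dictionary(m, n, density=0.1, values=(1, 2, 3), rng=None):
    """Sparse signed m-by-n matrix B: each entry is non-zero with
    probability `density` (an assumed parameter), and each non-zero is
    drawn uniformly from the signed set ±values."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random((m, n)) < density
    return mask * rng.choice([-1, 1], (m, n)) * rng.choice(values, (m, n))

def generate_error(n, E, max_mag=4, rng=None):
    """Error vector e in R^n with exactly E non-zero entries, each drawn
    uniformly from ±{1, ..., max_mag}."""
    rng = np.random.default_rng() if rng is None else rng
    e = np.zeros(n)
    support = rng.choice(n, size=E, replace=False)  # distinct positions
    e[support] = rng.choice([-1, 1], E) * rng.integers(1, max_mag + 1, E)
    return e
```

The paper reports 100 such error vectors per error count E; that is a simple loop over `generate_error(n, E)`.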