Associative Memory via a Sparse Recovery Model

Authors: Arya Mazumdar, Ankit Singh Rawat

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In Sec. 6, we present some experimental results showcasing the performance of the proposed associative memories. Figure 2 presents our simulation results for n = 1000. For recall phase, we employ the Bregman iterative (BI) algorithm with the IST algorithm as a subroutine. We also plot the performance of the primal dual (PD) algorithm based linear programming solution for the recovery problem of interest (cf. (18))."
Researcher Affiliation | Academia | Arya Mazumdar, Department of ECE, University of Minnesota Twin Cities (arya@umn.edu); Ankit Singh Rawat, Computer Science Department, Carnegie Mellon University (asrawat@andrew.cmu.edu)
Pseudocode | Yes | The paper includes "Algorithm 1: Find null-space with low coherence".
Open Source Code | No | The paper does not include an unambiguous statement or a direct link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper states: "we first sample an n × r sub-gaussian matrix A with i.i.d. entries" and "The message vectors to be stored are then assumed to be spanned by the k columns of the sampled matrix." This indicates that the experimental data was synthetically generated rather than drawn from a publicly available dataset, and no access information is provided.
Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) for training, validation, or testing data.
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU models, CPU types, or cloud computing specifications, used for running the experiments.
Software Dependencies | No | The paper mentions the use of the Bregman iterative (BI) algorithm and the primal dual (PD) algorithm but does not specify any software names with version numbers (e.g., Python 3.8, PyTorch 1.9, CPLEX 12.4) that would be needed to replicate the experiments.
Experiment Setup | Yes | "Figure 2 presents our simulation results for n = 1000. Furthermore, we consider message sets with two different dimensions which amounts to m = 500 and m = 700. We run 50 iterations of the recovery algorithms for a given set of parameters to obtain the estimates of the probability of failure (of exact recovery of error vector). In Fig. 2a, we focus on the setting with Gaussian basis matrix (for message set) and unit variance zero mean Gaussian noise during the recall phase."
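The recall phase above relies on the IST (iterative soft-thresholding) algorithm as a subroutine. A minimal sketch of standard IST for sparse recovery is shown below; the step size, regularization weight `lam`, iteration count, and problem dimensions are all illustrative placeholders, not values taken from the paper.

```python
import numpy as np

def ista(A, y, lam=0.01, step=None, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Generic textbook IST, not the paper's exact implementation."""
    if step is None:
        # 1/L, where L = ||A||_2^2 bounds the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))                  # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# sanity check: recover a 1-sparse vector from random Gaussian measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
x_true = np.zeros(50)
x_true[7] = 1.0
x_hat = ista(A, A @ x_true)
print(int(np.argmax(np.abs(x_hat))))  # → 7
```

In the Bregman iterative scheme the paper reports, a routine like this would be called repeatedly on updated residuals; here it is shown only as a standalone solver.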
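The quoted protocol estimates a probability of failure of exact recovery by repeating the recovery algorithm over many trials. A scaled-down sketch of that Monte-Carlo loop is given below; the dimensions, sparsity level, tolerance, and the `solve` callback are assumptions for illustration (the paper uses n = 1000 with BI/IST and a primal-dual LP solver).

```python
import numpy as np

def estimate_failure_prob(solve, n=20, m=20, sparsity=3, trials=10,
                          tol=1e-3, seed=0):
    """Fraction of trials in which `solve(A, y)` fails to exactly recover
    a random sparse error vector. Generic protocol sketch, with
    placeholder dimensions rather than the paper's n = 1000."""
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)    # measurement matrix
        e = np.zeros(n)
        support = rng.choice(n, sparsity, replace=False)
        e[support] = rng.standard_normal(sparsity)      # sparse error vector
        e_hat = solve(A, A @ e)
        # declare failure when the relative reconstruction error exceeds tol
        if np.linalg.norm(e_hat - e) > tol * max(np.linalg.norm(e), 1.0):
            failures += 1
    return failures / trials

# sanity check with a square system, where plain least squares already
# recovers the error vector exactly, so the estimated failure rate is 0
ls = lambda A, y: np.linalg.lstsq(A, y, rcond=None)[0]
p = estimate_failure_prob(ls, n=20, m=20, sparsity=3, trials=10)
print(p)  # → 0.0
```

In the paper's setting, `solve` would be the BI/IST routine or the primal-dual LP, and the loop would be swept over the parameters behind the two curves (m = 500 and m = 700) in Figure 2.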