On learning sparse vectors from mixture of responses
Authors: Nikita Polyanskii
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | As the main contribution of the paper, we prove the existence of learning algorithms for the first problem that work without any assumptions and are resilient to noisy measurements. Under a mild structural assumption on the unknown vectors, we also show the existence of learning algorithms for the second problem and rigorously analyze their query complexity. We emphasize that our contribution is primarily theoretical, as the questions raised in our work are purely mathematical. |
| Researcher Affiliation | Collaboration | Nikita Polyanskii IOTA Foundation Berlin, Germany nikitapolyansky@gmail.com. The work was conducted in part when Nikita Polyanskii was with the Technical University of Munich and the Skolkovo Institute of Science and Technology. |
| Pseudocode | Yes | Algorithm 1: Support recovery algorithm |
| Open Source Code | No | The paper does not provide access to any source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not involve empirical experiments with datasets, thus no information on publicly available training data is provided. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets, thus no information on training/validation/test splits is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe experiments run on specific hardware. No hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe experiments that would require specific software versions or dependencies. No such details are mentioned. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs, not empirical experimental setups. Therefore, no details on hyperparameters or training configurations are provided. |