Average-case hardness of RIP certification
Authors: Tengyao Wang, Quentin Berthet, Yaniv Plan
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our main result is that certification in this sense is hard even in a near-optimal regime. Our results are based on a new, weaker assumption on the problem of detecting dense subgraphs. |
| Researcher Affiliation | Academia | Tengyao Wang, Centre for Mathematical Sciences, Cambridge CB3 0WB, United Kingdom, t.wang@statslab.cam.ac.uk; Quentin Berthet, Centre for Mathematical Sciences, Cambridge CB3 0WB, United Kingdom, q.berthet@statslab.cam.ac.uk; Yaniv Plan, 1986 Mathematics Road, Vancouver BC V6T 1Z2, Canada, yaniv@math.ubc.ca |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not mention providing open-source code for the methodology or results described. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on datasets, thus it does not mention public or open datasets for training. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments with dataset splits. Therefore, it does not provide information about training/test/validation splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or the specific hardware used to run experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies or version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or system-level training settings. |
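
The hardness result quoted in the Research Type row above concerns certifying the restricted isometry property (RIP) of a given design matrix. As background for why certification is computationally demanding, the following is a minimal, hypothetical sketch (not code from the paper) of the naive exact approach: the restricted isometry constant of order `s` is the maximum spectral deviation of the Gram matrix from the identity over all `(p choose s)` column supports, so brute-force certification scales combinatorially in `p` and `s`.

```python
# Hypothetical illustration only: exact RIP-constant computation by support
# enumeration. The paper's point is that no efficient certifier is expected
# to match this guarantee in the near-optimal regime.
from itertools import combinations

import numpy as np


def rip_constant_bruteforce(X: np.ndarray, s: int) -> float:
    """Exact restricted isometry constant of order s via support enumeration."""
    n, p = X.shape
    theta = 0.0
    for support in combinations(range(p), s):
        XS = X[:, support]                       # n x s submatrix on this support
        gram = XS.T @ XS                          # s x s Gram matrix
        # Operator norm of (gram - I) for a symmetric matrix is the largest
        # absolute deviation of its eigenvalues from 1.
        deviation = np.max(np.abs(np.linalg.eigvalsh(gram) - 1.0))
        theta = max(theta, deviation)
    return theta


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p, s = 40, 20, 3                           # small sizes keep enumeration feasible
    X = rng.standard_normal((n, p)) / np.sqrt(n)  # column-normalised Gaussian design
    print(f"RIP constant of order {s}: {rip_constant_bruteforce(X, s):.3f}")
```

The example sizes are deliberately tiny: at realistic dimensions the number of supports explodes, which is consistent with the paper's theoretical finding that certification is hard in the average case.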