Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
# Average-case hardness of RIP certification
Authors: Tengyao Wang, Quentin Berthet, Yaniv Plan
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our main result is that certification in this sense is hard even in a near-optimal regime. Our results are based on a new, weaker assumption on the problem of detecting dense subgraphs. |
| Researcher Affiliation | Academia | Tengyao Wang, Centre for Mathematical Sciences, Cambridge CB3 0WB, United Kingdom (EMAIL); Quentin Berthet, Centre for Mathematical Sciences, Cambridge CB3 0WB, United Kingdom (EMAIL); Yaniv Plan, 1986 Mathematics Road, Vancouver BC V6T 1Z2, Canada (EMAIL) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not mention providing open-source code for the methodology or results described. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on datasets, thus it does not mention public or open datasets for training. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments, so it provides no information about training/validation/test splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or the specific hardware used to run experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies or version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or system-level training settings. |
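For background on the paper's subject: certifying the restricted isometry property (RIP) means verifying that a matrix approximately preserves the norm of every sparse vector, and the only known exact check enumerates all sparse supports. The sketch below (not code from the paper, which contains none) brute-forces the restricted isometry constant for a small matrix, illustrating why naive certification is computationally expensive.

```python
import itertools
import numpy as np

def rip_constant(A, s):
    """Brute-force the restricted isometry constant delta_s of A:
    the smallest delta such that
        (1 - delta) * ||x||^2 <= ||A x||^2 <= (1 + delta) * ||x||^2
    for every s-sparse vector x. Enumerates all column subsets of
    size s, so the cost grows combinatorially in s -- the expense
    that motivates studying the hardness of RIP certification."""
    n = A.shape[1]
    delta = 0.0
    for S in itertools.combinations(range(n), s):
        # Gram matrix of the selected columns; its eigenvalues give
        # the extreme values of ||A x||^2 over unit x supported on S.
        G = A[:, S].T @ A[:, S]
        eig = np.linalg.eigvalsh(G)
        delta = max(delta, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
    return delta

# A matrix with orthonormal columns satisfies RIP exactly (delta = 0).
A = np.eye(4)[:, :3]
print(rip_constant(A, 2))  # 0.0
```

By eigenvalue interlacing, `rip_constant(A, s)` is nondecreasing in `s`, so a certificate for sparsity level `s` also covers all smaller levels.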