Efficient Nonparametric Smoothness Estimation
Authors: Shashank Singh, Simon S. Du, Barnabas Poczos
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our theoretical results on simulated data (Section 8). In this section, we use synthetic data to demonstrate the effectiveness of our methods. All experiments use 10, 10^2, ..., 10^5 samples for estimation (a reproduction sketch follows the table). |
| Researcher Affiliation | Academia | Shashank Singh, Carnegie Mellon University, sss1@andrew.cmu.edu; Simon S. Du, Carnegie Mellon University, ssdu@cs.cmu.edu; Barnabás Póczos, Carnegie Mellon University, bapoczos@cs.cmu.edu |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks (e.g., labeled 'Algorithm' or 'Pseudocode'). |
| Open Source Code | Yes | MATLAB code for these experiments is available at https://github.com/sss1/SobolevEstimation. |
| Open Datasets | No | The paper uses 'synthetic data' and 'simulated data' generated from specified distributions (e.g., Gaussians, Uniform) rather than publicly available datasets with specific access information or citations. |
| Dataset Splits | No | The paper mentions using varying numbers of samples for estimation and splitting samples for some calculations, but it does not provide specific train/validation/test dataset splits (percentages, counts, or predefined splits) for reproducibility. |
| Hardware Specification | No | The paper discusses computational efficiency but does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'MATLAB code' but does not provide specific version numbers for MATLAB or any other software dependencies, libraries, or solvers used in the experiments. |
| Experiment Setup | No | The paper describes the theoretical construction and uses synthetic data, mentioning the number of samples (n) and a smoothing parameter (Z_n). However, it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, epochs), optimizer settings, or other system-level training configurations. |
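
The experiment rows above mention synthetic data drawn from distributions such as Gaussians and Uniform, sample sizes 10, 10^2, ..., 10^5, and a smoothing (truncation) parameter Z_n, but no further configuration. The sketch below is a minimal, hypothetical reproduction harness in Python rather than the authors' MATLAB code: the truncated-Fourier plug-in estimator, the smoothness order s = 1, and the truncation Z_n = 20 are illustrative assumptions, not details taken from the paper or the table.

```python
import numpy as np


def sobolev_plugin_estimate(x, s, z_max):
    """Estimate sum_{0 < |z| <= z_max} |z|^(2s) |p_hat(z)|^2 for samples x in [0, 1],
    where p_hat(z) is the z-th Fourier coefficient of the sampling density.
    Illustrative truncated-Fourier plug-in, not necessarily the authors' estimator."""
    n = len(x)
    est = 0.0
    for z in range(1, z_max + 1):
        # Empirical Fourier coefficient: (1/n) * sum_j exp(-2*pi*i*z*x_j)
        phi = np.mean(np.exp(-2j * np.pi * z * x))
        # Unbiased (U-statistic) estimate of |p_hat(z)|^2: drop the j == k diagonal terms
        sq = (n * n * np.abs(phi) ** 2 - n) / (n * (n - 1))
        # Factor 2 accounts for the conjugate pair z and -z
        est += 2.0 * (z ** (2 * s)) * sq
    return est


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s, z_max = 1, 20  # hypothetical smoothness order and truncation level Z_n
    for n in [10, 10**2, 10**3, 10**4, 10**5]:
        x = rng.uniform(0.0, 1.0, size=n)  # Uniform[0,1]: true value is 0
        print(f"n = {n:6d}   estimate = {sobolev_plugin_estimate(x, s, z_max): .4f}")
```

For the Uniform[0,1] example, every nonzero Fourier coefficient of the density vanishes, so the estimate should shrink toward 0 as n grows, which makes it a convenient sanity check for the sample-size sweep described in the table.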