An Efficient Pruning Algorithm for Robust Isotonic Regression
Authors: Cong Han Lim
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that this algorithm can run much faster than the dynamic programming approach on robust estimators, especially as the desired accuracy increases. |
| Researcher Affiliation | Academia | Cong Han Lim, School of Industrial and Systems Engineering, Georgia Tech, Atlanta, GA 30332, clim31@gatech.edu. Work done while at the Wisconsin Institute for Discovery, University of Wisconsin-Madison. |
| Pseudocode | Yes | Algorithm 1: Dynamic Program for fixed grid G_k; Algorithm 2: Algorithmic Framework for Faster Robust Isotonic Regression; Algorithm 3: Pruning I via Lower/Upper Bounds; Algorithm 4: Pruning Subroutine; Algorithm 5: Main Algorithm for Refining via Linear Underestimators; Algorithm 6: A Pruning Algorithm for Robust Isotonic Regression. A schematic sketch of the fixed-grid dynamic program appears after the table. |
| Open Source Code | No | The paper does not provide any explicit statements about making the source code available or provide any links to a code repository. |
| Open Datasets | No | The paper describes generating synthetic data: 'We generate a series of n points y1, . . . , yn from 0.2 to 0.8 equally spaced out and added Gaussian random noise with standard deviation of 0.03. We then randomly flipped between 5% to 50% of the points around 0.5, and these points act as the outliers.' It does not provide access information for a publicly available or open dataset. |
| Dataset Splits | No | The paper describes how the data was generated and used in experiments but does not specify any training, validation, or test dataset splits. |
| Hardware Specification | Yes | The algorithms were implemented in Python 3.6.7, and the experiments were run on an Intel 7th-generation Core i5-7200U dual-core processor with 8 GB of RAM. |
| Software Dependencies | No | The paper states: 'The algorithms were implemented in Python 3.6.7'. It does not list specific libraries or specialized solvers with version numbers. |
| Experiment Setup | Yes | We set n to 1000 and varied k from 2^7 = 128 to 2^16 = 65536. We then randomly flipped between 5% and 50% of the points around 0.5, and these points act as the outliers. The results are averaged over 10 independent trials. A sketch of the data-generation procedure appears after the table. |
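
The paper's Algorithm 1 is a dynamic program over a fixed grid of candidate output values. Below is a minimal Python sketch of that fixed-grid DP (it does not include the paper's pruning machinery); the function name `grid_isotonic_dp`, the truncated squared loss, and all parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def grid_isotonic_dp(y, grid, loss):
    """Minimize sum_i loss(x_i - y_i) subject to x_1 <= ... <= x_n,
    with each x_i restricted to the sorted grid of k values.
    Runs in O(n*k) time via running prefix minima over the grid."""
    n, k = len(y), len(grid)
    cost = np.array([loss(g - y[0]) for g in grid])
    back = np.zeros((n, k), dtype=int)  # argmin pointers for backtracking
    for i in range(1, n):
        best, best_j = np.inf, 0
        prefix_min = np.empty(k)
        for j in range(k):
            if cost[j] < best:          # prefix minimum enforces monotonicity
                best, best_j = cost[j], j
            prefix_min[j] = best
            back[i, j] = best_j
        cost = prefix_min + np.array([loss(g - y[i]) for g in grid])
    j = int(np.argmin(cost))
    x = np.empty(n)
    for i in range(n - 1, -1, -1):      # recover the monotone fit
        x[i] = grid[j]
        j = back[i, j]
    return float(np.min(cost)), x

# Example with a truncated squared loss, a common robust estimator
# (this specific loss and grid are illustrative, not from the paper):
loss = lambda r: min(r * r, 0.01)
y = np.array([0.2, 0.9, 0.4, 0.5, 0.1, 0.8])
obj, x = grid_isotonic_dp(y, np.linspace(0.0, 1.0, 2**7), loss)
```

Because each row of the DP only needs the prefix minimum of the previous row, the per-point cost is O(k) rather than O(k^2), which is what makes grid size the dominant factor in the running-time comparison above.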
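
For the experiment setup, here is a minimal sketch of the synthetic data generation described in the table, assuming "flipped around 0.5" means reflecting y to 1 - y; the helper name `make_synthetic` and the seeded RNG are my additions.

```python
import numpy as np

def make_synthetic(n=1000, noise_sd=0.03, flip_frac=0.20, seed=0):
    """n points equally spaced from 0.2 to 0.8 with Gaussian noise
    (sd 0.03); a random flip_frac of the points (5%-50% in the paper)
    is reflected around 0.5 to act as outliers."""
    rng = np.random.default_rng(seed)
    y = np.linspace(0.2, 0.8, n) + rng.normal(0.0, noise_sd, size=n)
    outliers = rng.random(n) < flip_frac
    y[outliers] = 1.0 - y[outliers]  # reflect around 0.5: y -> 2*0.5 - y
    return y

# The paper varies the grid size k from 2**7 to 2**16 and averages
# each configuration over 10 independent trials:
ys = [make_synthetic(seed=t) for t in range(10)]
```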