Robust Regression via Heuristic Hard Thresholding

Authors: Xuchao Zhang, Liang Zhao, Arnold P. Boedihardjo, Chang-Tien Lu

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiment demonstrates that the effectiveness of our new method is superior to that of existing methods in the recovery of both regression coefficients and uncorrupted sets, with very competitive efficiency." "In this section, we report the extensive experimental evaluation carried out to verify the robustness and efficiency of the proposed method."
Researcher Affiliation | Collaboration | Virginia Tech, Falls Church, VA, USA; George Mason University, Fairfax, VA, USA; U.S. Army Corps of Engineers, Alexandria, VA, USA
Pseudocode | Yes | "Algorithm 1: RLHH ALGORITHM"
Open Source Code | Yes | "Details of both the source code and sample data used in the experiment can be downloaded here." (footnote link: https://github.com/xuczhang/RLHH)
Open Datasets | No | "To demonstrate the performance of our proposed method, we carried out comprehensive experiments in synthetic datasets. Specifically, the simulation samples were randomly generated according to the model in Equation (1) for RLSR problem..."
Dataset Splits | No | The paper mentions generating "synthetic datasets" and evaluates with L2 error and F1-score, but it gives no percentages or counts for training, validation, or test splits, nor does it refer to standard predefined splits for these synthetic datasets.
Hardware Specification | Yes | "All the experiments were conducted on a 64-bit machine with Intel(R) Core(TM) quad-core processor (i7 CPU @ 3.6GHz) and 32.0GB memory."
Software Dependencies | No | The paper does not provide version numbers for any software dependencies or libraries used (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | Algorithm 1 explicitly lists "tolerance ϵ" as an input, which dictates the convergence criterion: iterate until ‖r^{t+1}_{S_{t+1}} − r^t_{S_t}‖_2 < ϵn.
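The setup described above — synthetic samples drawn from the RLSR corruption model of Equation (1), and an iterative algorithm that stops once the residual change falls below ϵn — can be illustrated with a minimal sketch. Note this is a generic iterative hard-thresholding loop (alternating least-squares fits with residual-based sample selection), not the authors' RLHH heuristic for estimating the uncorrupted set; all function and variable names here are hypothetical:

```python
import numpy as np

def hard_threshold_regression(X, y, n_clean, eps=1e-6, max_iter=100):
    """Generic hard-thresholding loop: repeatedly fit least squares on the
    n_clean samples with the smallest residuals, stopping when the residual
    vector changes by less than eps * n between iterations."""
    n = len(y)
    S = np.arange(n)                 # start from all samples
    r_prev = np.full(n, np.inf)
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        beta, *_ = np.linalg.lstsq(X[S], y[S], rcond=None)
        r = y - X @ beta                       # residuals on all samples
        S = np.argsort(np.abs(r))[:n_clean]    # keep the best-fit samples
        if np.linalg.norm(r - r_prev) < eps * n:
            break                              # convergence: ||r_new - r_old||_2 < eps * n
        r_prev = r
    return beta, S

# Synthetic data in the spirit of the RLSR model: y = X beta* + u + noise,
# where u is a sparse corruption vector (assumed form, for illustration).
rng = np.random.default_rng(0)
n, p, n_corrupt = 200, 5, 40
X = rng.standard_normal((n, p))
beta_star = rng.standard_normal(p)
u = np.zeros(n)
u[rng.choice(n, n_corrupt, replace=False)] = rng.uniform(5, 10, n_corrupt)
y = X @ beta_star + u + 0.01 * rng.standard_normal(n)

beta_hat, S_hat = hard_threshold_regression(X, y, n_clean=n - n_corrupt)
print(np.linalg.norm(beta_hat - beta_star))
```

The L2 coefficient error printed at the end, together with an F1-score comparing `S_hat` against the true uncorrupted indices, mirrors the two evaluation metrics the report mentions.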