Trimmed Maximum Likelihood Estimation for Robust Generalized Linear Model

Authors: Pranjal Awasthi, Abhimanyu Das, Weihao Kong, Rajat Sen

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | "Our key theoretical contribution in this work is a general analysis of the trimmed MLE estimator. In particular, we show that for a broad family of GLMs, and under adversarial corruptions of only the labels, not only does the iterative trimmed MLE estimator enjoy theoretical guarantees, it in fact nearly achieves the minimax error rate!" |
| Researcher Affiliation | Industry | Pranjal Awasthi (Google Research, pranjalawasthi@google.com); Abhimanyu Das (Google Research, abhidas@google.com); Weihao Kong (Google Research, weihaokong@google.com); Rajat Sen (Google Research, senrajat@google.com) |
| Pseudocode | Yes | Algorithm 1: Alternating minimization of trimmed maximum likelihood estimator |
| Open Source Code | No | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]" |
| Open Datasets | No | The paper is theoretical and does not mention specific datasets (such as MNIST or CIFAR-10), nor does it provide links, DOIs, or citations for data used in training or empirical evaluation. The ethics checklist confirms N/A for experiments and data. |
| Dataset Splits | No | The paper is theoretical and conducts no empirical experiments, so it specifies no training, validation, or test splits. |
| Hardware Specification | No | The paper answers "[N/A]" to "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?", so no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and lists no software dependencies with version numbers (e.g., libraries, frameworks, or solvers) needed to reproduce experimental results. |
| Experiment Setup | No | The paper focuses on theoretical analysis and algorithm design. It explicitly answers "[N/A]" for "training details (e.g., data splits, hyperparameters, how they were chosen)", so no experimental setup details are provided. |
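The Pseudocode row refers to Algorithm 1, alternating minimization of the trimmed maximum likelihood estimator. A minimal sketch of that idea, for the simplest GLM (ordinary linear regression, where per-sample log-likelihood is, up to constants, the negative squared residual, so keeping the highest-likelihood samples means keeping the smallest residuals) might look as follows. The function name, defaults, and toy data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def trimmed_mle_linear(X, y, keep_frac=0.85, n_iters=20, seed=0):
    """Alternating minimization of a trimmed MLE (illustrative sketch).

    Alternates two steps:
      1) fit the MLE on the current subset of samples;
      2) re-select the keep_frac fraction of samples with the highest
         likelihood (here: smallest squared residuals) under that fit.
    """
    n = len(y)
    k = int(keep_frac * n)
    rng = np.random.default_rng(seed)
    subset = rng.choice(n, size=k, replace=False)  # initial random subset
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        # Step 1: MLE on the subset = least squares for the Gaussian model.
        beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
        # Step 2: keep the k samples best explained by the current fit.
        sq_resid = (y - X @ beta) ** 2
        subset = np.argsort(sq_resid)[:k]
    return beta

# Toy demo: a small fraction of labels is corrupted, as in the
# label-corruption setting the paper analyzes.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=500)
y[:50] += 10.0  # corrupt 10% of the labels
beta_hat = trimmed_mle_linear(X, y)
```

With enough clean samples the trimming step discards the corrupted points after a few iterations and `beta_hat` lands close to `beta_true`; the same alternating structure applies to other GLMs by swapping in the appropriate per-sample log-likelihood.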