Adversarially Robust Hypothesis Transfer Learning
Authors: Yunjuan Wang, Raman Arora
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We begin by examining an adversarial variant of the regularized empirical risk minimization learning rule that we term A-RERM. Assuming a nonnegative smooth loss function with a strongly convex regularizer, we establish a bound on the robust generalization error of the hypothesis returned by A-RERM in terms of the robust empirical loss and the quality of the initialization. If the initialization is good, i.e., there exists a weighted combination of auxiliary hypotheses with a small robust population loss, the bound exhibits a fast rate of O(1/n). Otherwise, we get the standard rate of O(1/√n). (A hedged sketch of this objective appears below the table.) |
| Researcher Affiliation | Academia | Yunjuan Wang¹, Raman Arora¹. ¹Department of Computer Science, Johns Hopkins University, Baltimore, USA. Correspondence to: Yunjuan Wang <ywang509@jhu.edu>. |
| Pseudocode | No | The paper describes algorithms textually but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | This is a theoretical paper and does not mention releasing any source code. |
| Open Datasets | No | The paper is theoretical and does not use or provide access to specific datasets. It refers to a 'training dataset of size n from an underlying distribution D'. |
| Dataset Splits | No | The paper is theoretical and does not involve experiments with dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not mention any hardware specifications as it does not involve computational experiments. |
| Software Dependencies | No | The paper is theoretical and does not describe any software dependencies with specific version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or training configurations. |
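To make the A-RERM rule quoted above concrete, the following is a minimal sketch of a natural formalization, not the paper's exact statement. The hypothesis class H, loss ℓ, perturbation budget ε, regularization weight λ, and the initialization h₀ as a weighted combination of auxiliary hypotheses h_k with weights α_k are all notational assumptions on our part, reconstructed from the abstract's description.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A plausible form of the A-RERM objective (our notation, not necessarily the
% paper's): minimize the robust empirical loss over n training points, plus a
% strongly convex regularizer centered at an initialization h_0 built as a
% weighted combination of auxiliary hypotheses.
\[
  \hat{h} = \operatorname*{arg\,min}_{h \in \mathcal{H}}\;
  \frac{1}{n} \sum_{i=1}^{n}
    \max_{\lVert \delta_i \rVert \le \epsilon}
    \ell\bigl(h(x_i + \delta_i),\, y_i\bigr)
  \;+\; \lambda\, \lVert h - h_0 \rVert^{2},
  \qquad
  h_0 = \sum_{k} \alpha_k\, h_k^{\mathrm{aux}}.
\]
% Per the abstract: if h_0 already attains a small robust population loss, the
% bound exhibits the fast O(1/n) rate; otherwise it falls back to the standard
% O(1/sqrt(n)) rate.
\end{document}
```

Under this reading, the quality of the initialization h₀ is exactly what governs which of the two rates from the abstract applies.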