Does Data Augmentation Lead to Positive Margin?

Authors: Shashank Rajput, Zhili Feng, Zachary Charles, Po-Ling Loh, Dimitris Papailiopoulos

ICML 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | "In this work, we analyze the robustness that DA begets by quantifying the margin that DA enforces on empirical risk minimizers. We present lower bounds on the number of augmented data points required for non-zero margin, and show that commonly used DA techniques may only introduce significant margin after adding exponentially many points to the data set." |
| Researcher Affiliation | Academia | Department of Computer Science, Department of Electrical and Computer Engineering, and Department of Statistics, University of Wisconsin-Madison. |
| Pseudocode | No | The paper presents theoretical analysis and proofs but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not state that source code is released and provides no link to a code repository. |
| Open Datasets | No | The paper is theoretical and runs no experiments on datasets, so no dataset access information is provided for training or evaluation. |
| Dataset Splits | No | The paper is theoretical and describes no experiments that would involve training, validation, or test splits. |
| Hardware Specification | No | The paper is theoretical and does not describe an experimental setup or the hardware used to run experiments. |
| Software Dependencies | No | The paper is theoretical and lists no software dependencies or version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe experiment setup details, hyperparameters, or system-level training settings. |
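Although the paper itself contains no code, the abstract's central quantity, the margin that data augmentation (DA) enforces on empirical risk minimizers (ERMs), is easy to illustrate numerically. The sketch below is a hypothetical toy construction, not the paper's method: the 2-D data, the augmentation radius, and the random sampling of candidate classifiers are all our own assumptions. It samples linear classifiers, keeps those with zero training error (the ERM set), and compares the worst-case margin of that set with and without ℓ2-ball augmentation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable 2-D data (a hypothetical example, not from the paper).
X = np.array([[2.0, 1.0], [1.5, -0.5], [-2.0, 0.5], [-1.0, -1.0]])
y = np.array([1, 1, -1, -1])

def margin(w, X, y):
    """Geometric margin of the linear classifier x -> sign(<w, x>) on (X, y)."""
    return np.min(y * (X @ w)) / np.linalg.norm(w)

def augment(X, y, r, k):
    """Add k random points on the radius-r sphere around every sample (ell_2 DA)."""
    d = rng.normal(size=(len(X) * k, X.shape[1]))
    d = r * d / np.linalg.norm(d, axis=1, keepdims=True)
    X_aug = np.vstack([X, np.repeat(X, k, axis=0) + d])
    y_aug = np.concatenate([y, np.repeat(y, k)])
    return X_aug, y_aug

X_aug, y_aug = augment(X, y, r=0.5, k=50)

# Sample random linear classifiers and keep the empirical risk minimizers
# (zero 0-1 loss) on each training set.  Since the augmented set contains the
# original points, its ERM set is a subset of the original ERM set.
W = rng.normal(size=(5000, 2))
erm_plain = W[np.all(y[:, None] * (X @ W.T) > 0, axis=0)]
erm_aug = W[np.all(y_aug[:, None] * (X_aug @ W.T) > 0, axis=0)]

# Worst-case margin, measured on the ORIGINAL data, over each ERM set.
m_plain = min(margin(w, X, y) for w in erm_plain)
m_aug = min(margin(w, X, y) for w in erm_aug)
print(f"worst ERM margin without augmentation: {m_plain:.3f}")
print(f"worst ERM margin with augmentation:    {m_aug:.3f}")
```

On this toy set, augmentation rules out the near-zero-margin separators, leaving only classifiers whose margin is on the order of the augmentation radius; the paper's lower bounds quantify how many augmented points are needed for such a guarantee in general.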