Theoretical Analysis of Adversarial Learning: A Minimax Approach
Authors: Zhuozhuo Tu, Jingwei Zhang, Dacheng Tao
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we propose a general theoretical method for analyzing the risk bound in the presence of adversaries. Specifically, we fit the adversarial learning problem into the minimax framework. We first show that the original adversarial learning problem can be transformed into a minimax statistical learning problem by introducing a transport map between distributions. We then prove a new risk bound for this minimax problem in terms of covering numbers under a weak version of the Lipschitz condition. (A sketch of these objects appears after the table.) |
| Researcher Affiliation | Academia | UBTECH Sydney AI Centre, School of Computer Science, The University of Sydney, Australia; Department of Computer Science and Engineering, HKUST, Hong Kong |
| Pseudocode | No | The paper contains mathematical derivations and proofs but no pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any statements about providing open-source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments involving datasets, thus no information on public availability of training data is provided. |
| Dataset Splits | No | The paper is theoretical and does not describe experiments with data splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental hardware. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training settings. |
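
Every empirical reproducibility variable above is "No" because the contribution is purely theoretical. For readers of the "Research Type" row, the following is a minimal LaTeX sketch of the objects the abstract names: the adversarial risk, its minimax reformulation via a transport map, and the covering-number shape of the risk bound. The perturbation norm, the choice of the infinity-order Wasserstein ball, the covering radius `\tau`, and all constants are illustrative assumptions on our part, not the paper's exact statement.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Adversarial risk of a hypothesis f under perturbations of radius
% \epsilon (the norm and radius are assumed here for illustration):
\[
R_{\mathrm{adv}}(f) \;=\;
\mathbb{E}_{(x,y)\sim P}\Big[\sup_{\|x'-x\|\le \epsilon}
  \ell\big(f(x'),y\big)\Big].
\]

% Via a transport map that sends each x to its worst-case perturbation
% x', this becomes a minimax (distributionally robust) risk over a
% Wasserstein ball around the data distribution P:
\[
R_{\mathrm{adv}}(f) \;=\;
\sup_{P'\,:\,W_\infty(P',P)\le \epsilon}
  \mathbb{E}_{(x,y)\sim P'}\big[\ell(f(x),y)\big].
\]

% Shape of the resulting high-probability risk bound: with probability
% at least 1-\delta over an i.i.d. sample of size n, uniformly over the
% hypothesis class \mathcal{F} (constants are placeholders):
\[
\sup_{W_\infty(P',P)\le\epsilon}\mathbb{E}_{P'}\big[\ell(f(x),y)\big]
\;\le\;
\widehat{R}_{\mathrm{adv}}(f)
\;+\; O\!\Big(\sqrt{\tfrac{\log \mathcal{N}(\mathcal{F},\tau)}{n}}\Big)
\;+\; O\!\Big(\sqrt{\tfrac{\log(1/\delta)}{n}}\Big),
\]
% where \widehat{R}_{\mathrm{adv}}(f) is the empirical adversarial risk,
% \mathcal{N}(\mathcal{F},\tau) is a covering number of the class at
% radius \tau, and the weak Lipschitz condition on \ell controls how a
% cover of \mathcal{F} transfers to the adversarially perturbed risk.

\end{document}
```

The key design point the abstract highlights is the first equality: once the adversary is absorbed into a transport map, adversarial learning is an instance of minimax statistical learning, so covering-number tools apply directly.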