A Sublinear Adversarial Training Algorithm

Authors: Yeqi Gao, Lianke Qin, Zhao Song, Yitan Wang

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper we analyze the convergence guarantee of the adversarial training procedure on a two-layer neural network with shifted ReLU activation, and show that only o(m) neurons are activated for each input data point per iteration. Furthermore, we develop an algorithm for adversarial training with time cost o(mnd) per iteration by applying a half-space reporting data structure. (An activation-sparsity sketch appears below the table.)
Researcher Affiliation | Collaboration | Yeqi Gao (Tsinghua University, Beijing, China, gaoyq23@mails.tsinghua.edu.cn); Lianke Qin (UC Santa Barbara, Santa Barbara, CA, USA, lianke@ucsb.edu); Zhao Song (Adobe Research, Seattle, WA, USA, zsong@adobe.com); Yitan Wang (Yale University, New Haven, CT, USA, yitan.wang@yale.edu)
Pseudocode | Yes | Algorithm 1: Sublinear adversarial training. (A hedged sketch of the training loop appears below the table.)
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | No | The paper mentions 'n input training data in d dimension' and a training set S = {(x_i, y_i)}_{i=1}^n. However, it does not specify a publicly available dataset, nor does it provide any links, DOIs, or citations to a specific dataset source.
Dataset Splits | No | The paper discusses a training set S but does not specify any explicit training/validation/test dataset splits, percentages, or absolute sample counts for different data partitions.
Hardware Specification | No | The paper does not mention any specific hardware used for computations, such as GPU/CPU models, memory specifications, or cloud resources.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or solvers).
Experiment Setup | Yes | Theorem 4.3: 'For all K > 0, ϵ ∈ (0, 1), and for every m larger than poly(n, K, 1/ϵ), we set η = Θ(ϵ·m^{−1/5}) and T = Θ(ϵ^{−2}·K^2) in Algorithm 1. For every W ∈ ℝ^{d×m} with ∥W_0 − W∥_{2,∞} ≤ K/m^{3/5}, Algorithm 1 outputs weights {W_t}_{t=1}^T such that...' Algorithm 1 also states 'Initialization a_0, b_0, W_0.', and Definition 3.1 describes these initializations: a_0 ∈ ℝ^m whose entries are uniformly sampled from {−1/m^{1/5}, +1/m^{1/5}}, W_0 ∈ ℝ^{d×m} whose entries are i.i.d. sampled from N(0, 1/m), and b_0 ∈ ℝ^m whose entries are i.i.d. sampled from N(0, 1). These specify the hyperparameters and initialization details. (A schedule/initialization sketch appears below the table.)
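
The o(m) activation claim rests on the shifted-ReLU threshold cutting off most pre-activations at initialization: with W_0 ~ N(0, 1/m) and b_0 ~ N(0, 1), pre-activations on unit-norm inputs are approximately N(0, 1), so a shift threshold τ keeps only the Gaussian tail active. Below is a minimal NumPy sketch, not the authors' code, that checks this empirically. The value τ = √(0.4·ln m) is an assumption borrowed from the shifted-ReLU literature (the summary above does not state the paper's constant); under it the expected active fraction is roughly m^{−1/5}, up to lower-order factors in τ.

```python
import math
import numpy as np

# Hypothetical check (not from the paper) of the o(m) activation claim.
rng = np.random.default_rng(0)
m, d, n = 100_000, 64, 32

W0 = rng.normal(0.0, math.sqrt(1.0 / m), size=(d, m))  # W0 entries ~ N(0, 1/m)
b0 = rng.normal(0.0, 1.0, size=m)                      # b0 entries ~ N(0, 1)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)          # unit-norm inputs

tau = math.sqrt(0.4 * math.log(m))                     # assumed shift threshold
active = (X @ W0 + b0) > tau                           # neuron fires iff pre-activation > tau

measured = active.mean()
gaussian_tail = 0.5 * math.erfc(tau / math.sqrt(2.0))  # exact P(N(0,1) > tau)
print(f"measured active fraction = {measured:.4f}")
print(f"Gaussian tail P(Z > tau) = {gaussian_tail:.4f}")
print(f"crude m^(-1/5) scaling   = {m ** -0.2:.4f}")
```

The measured fraction should track the Gaussian tail closely and sit well below 1, illustrating why per-example work can be restricted to a sublinear set of neurons.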
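Algorithm 1 is only named in this report, so the following is a minimal NumPy sketch of a sublinear-style adversarial training loop under explicitly assumed pieces: a squared loss, a one-step FGSM-style inner adversary with radius rho (the paper's exact inner maximization is not reproduced), the same assumed threshold τ = √(0.4·ln m), and a brute-force active_set query standing in for the half-space reporting data structure. All constants here (eta and T multipliers, rho, problem sizes) are hypothetical.

```python
import numpy as np

# Sketch of a sublinear-style adversarial training loop; NOT the authors' code.
rng = np.random.default_rng(1)
m, d, n = 4096, 32, 8
tau = np.sqrt(0.4 * np.log(m))   # assumed shift threshold for the shifted ReLU
rho = 0.05                       # assumed adversarial perturbation radius
K, eps = 1.0, 0.1

# Definition 3.1 initialization.
a = rng.choice([-1.0, 1.0], size=m) * m ** (-1 / 5)  # a0 ~ Unif{-m^(-1/5), +m^(-1/5)}
W = rng.normal(0.0, np.sqrt(1.0 / m), size=(d, m))   # W0 entries ~ N(0, 1/m)
b = rng.normal(0.0, 1.0, size=m)                     # b0 entries ~ N(0, 1)

# Toy unit-norm training pairs (x_i, y_i); the paper assumes n points in d dims.
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = rng.choice([-1.0, 1.0], size=n)

eta = eps * m ** (-1 / 5)        # eta = Theta(eps * m^(-1/5)), constant assumed to be 1
T = int(K ** 2 / eps ** 2)       # T = Theta(eps^(-2) * K^2), constant assumed to be 1

def active_set(W, b, x):
    # Stand-in for the half-space reporting (HSR) query: report all r with
    # <w_r, x> + b_r > tau. The paper's HSR data structure answers this in
    # time sublinear in m; this brute-force scan is O(md) and is used only
    # to keep the sketch self-contained.
    return np.nonzero(W.T @ x + b > tau)[0]

def forward(W, a, b, x, S):
    # Shifted-ReLU network evaluated on the active set S only.
    return a[S] @ (W[:, S].T @ x + b[S] - tau)

for _ in range(T):
    for i in range(n):
        # Inner step: one-step FGSM-style perturbation as a stand-in for the
        # paper's inner maximization (squared loss assumed).
        S = active_set(W, b, X[i])
        f = forward(W, a, b, X[i], S)
        grad_x = 2.0 * (f - Y[i]) * (W[:, S] @ a[S])
        x_adv = X[i] + rho * np.sign(grad_x)

        # Outer step: only the o(m) active neurons receive a gradient update;
        # this sparsity is what brings the per-iteration cost below O(mnd).
        S = active_set(W, b, x_adv)
        f = forward(W, a, b, x_adv, S)
        W[:, S] -= eta * 2.0 * (f - Y[i]) * np.outer(x_adv, a[S])
```

The design point to notice is that both the forward pass and the weight update touch only the reported active set, so replacing the brute-force query with a genuine half-space reporting structure is exactly what turns the per-iteration cost from Θ(mnd) into o(mnd).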
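Finally, the Theorem 4.3 schedule and the Definition 3.1 initialization can be packaged as two small helpers. A sketch, assuming unit constants inside the Θ(·) bounds (the paper's constants are not given in this report); c_eta and c_T are hypothetical knobs:

```python
import numpy as np

def theorem_4_3_schedule(m, eps, K, c_eta=1.0, c_T=1.0):
    # Step size and iteration count as reconstructed above:
    #   eta = Theta(eps * m^(-1/5)),  T = Theta(eps^(-2) * K^2).
    # c_eta and c_T are hypothetical constants.
    eta = c_eta * eps * m ** (-1 / 5)
    T = int(np.ceil(c_T * K ** 2 / eps ** 2))
    return eta, T

def definition_3_1_init(m, d, seed=0):
    # Initialization per Definition 3.1.
    rng = np.random.default_rng(seed)
    a0 = rng.choice([-1.0, 1.0], size=m) * m ** (-1 / 5)  # Unif{-1/m^(1/5), +1/m^(1/5)}
    W0 = rng.normal(0.0, np.sqrt(1.0 / m), size=(d, m))   # i.i.d. N(0, 1/m)
    b0 = rng.normal(0.0, 1.0, size=m)                     # i.i.d. N(0, 1)
    return a0, W0, b0

# Example: width m = 10^5, d = 64, target accuracy eps = 0.1, K = 1.
eta, T = theorem_4_3_schedule(m=100_000, eps=0.1, K=1.0)
a0, W0, b0 = definition_3_1_init(m=100_000, d=64)
print(f"eta = {eta:.2e}, T = {T}")
```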