Certified Minimax Unlearning with Generalization Rates and Deletion Capacity

Authors: Jiaqi Liu, Jian Lou, Zhan Qin, Kui Ren

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study the problem of (ϵ, δ)-certified machine unlearning for minimax models. Most existing works focus on unlearning from standard statistical learning models that have a single variable, and their unlearning steps hinge on the direct Hessian-based conventional Newton update. We develop a new (ϵ, δ)-certified machine unlearning algorithm for minimax models. It proposes a minimax unlearning step consisting of a total-Hessian-based complete Newton update and the Gaussian mechanism borrowed from differential privacy. To obtain the unlearning certification, our method injects calibrated Gaussian noise by carefully analyzing the sensitivity of the minimax unlearning step (i.e., the closeness between the minimax unlearning variables and the retraining-from-scratch variables). We derive the generalization rates in terms of population strong and weak primal-dual risk for three different cases of loss functions, i.e., (strongly-)convex-(strongly-)concave losses. We also provide the deletion capacity to guarantee that a desired population risk can be maintained as long as the number of deleted samples does not exceed the derived amount. With n training samples and model dimension d, it yields the order O(n/d^{1/4}), which shows a strict gap over the baseline method of differentially private minimax learning, which has O(n/d^{1/2}). In addition, our rates of generalization and deletion capacity match the state-of-the-art results derived previously for standard statistical learning models.
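For concreteness, the deletion-capacity comparison stated in the abstract can be written as a pair of bounds. This is a paraphrase: the constants c, c' and any logarithmic factors are suppressed here as an assumption of this sketch, not the paper's exact statement.

```latex
% Deletion capacity: the largest number m of deleted samples for which the
% desired population primal-dual risk is still guaranteed.
% c, c' are unspecified constants; log factors suppressed (assumption).
m_{\mathrm{unlearn}} \le c \cdot \frac{n}{d^{1/4}}
\qquad \text{vs.} \qquad
m_{\mathrm{DP\ baseline}} \le c' \cdot \frac{n}{d^{1/2}}
```

Since n/d^{1/4} ≥ n/d^{1/2} for d ≥ 1, the unlearning bound tolerates strictly more deletions at the same risk level as the dimension grows, which is the "strict gap" the abstract refers to.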
Researcher Affiliation | Academia | Jiaqi Liu¹, Jian Lou¹,², Zhan Qin¹, Kui Ren¹ (¹Zhejiang University; ²ZJU-Hangzhou Global Scientific and Technological Innovation Center)
Pseudocode | Yes | Algorithm 1: Minimax Learning Algorithm (A_{sc-sc}); Algorithm 2: Certified Minimax Unlearning for Strongly-Convex-Strongly-Concave Loss (Ā_{sc-sc}); Algorithm 3: Efficient Certified Minimax Unlearning (Ā_{efficient})
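To make the mechanism named in the pseudocode row concrete, here is a minimal NumPy sketch of one certified deletion step in the style of Algorithm 2: a total-Hessian-based complete Newton update followed by the Gaussian mechanism. The function names, the 1/(n - m) averaging convention, the sign of the dual update, and the sensitivity bound `Delta` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def total_hessians(H_ww, H_wv, H_vw, H_vv):
    """Schur-complement "total" Hessians of a minimax loss f(w, v).

    The blocks are the partial Hessians of the empirical loss on the
    retained (non-deleted) samples, evaluated at the trained pair (w, v).
    Unlike the direct Hessian used in single-variable unlearning, the
    total Hessian accounts for the coupling between w and v.
    """
    D_ww = H_ww - H_wv @ np.linalg.solve(H_vv, H_vw)
    D_vv = H_vv - H_vw @ np.linalg.solve(H_ww, H_wv)
    return D_ww, D_vv

def certified_unlearn(w, v, g_w, g_v, D_ww, D_vv, n, m, eps, delta, Delta):
    """One (eps, delta)-certified deletion of m out of n training samples.

    g_w, g_v: summed gradients of the m deleted samples' losses at (w, v).
    Delta:    sensitivity bound on the Newton update (from the paper's
              sensitivity analysis; treated as a given constant here).
    """
    # Complete Newton update: remove the deleted samples' influence.
    w_bar = w + np.linalg.solve(D_ww, g_w) / (n - m)
    v_bar = v + np.linalg.solve(D_vv, g_v) / (n - m)
    # Gaussian mechanism with the standard DP noise calibration, so the
    # output is (eps, delta)-indistinguishable from retraining from scratch.
    sigma = (Delta / eps) * np.sqrt(2.0 * np.log(1.25 / delta))
    rng = np.random.default_rng()
    return (w_bar + rng.normal(0.0, sigma, w_bar.shape),
            v_bar + rng.normal(0.0, sigma, v_bar.shape))
```

The Schur-complement blocks are what distinguish this step from single-variable certified unlearning: they fold the dual variable's best response into the primal update, which is what the abstract calls the total-Hessian-based complete Newton update.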
Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper is theoretical and does not describe experiments with specific, named datasets or provide access information for any dataset.
Dataset Splits | No | The paper is theoretical and does not discuss empirical data splits (training, validation, test).
Hardware Specification | No | The paper is theoretical and does not describe conducting experiments; therefore, no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe an implementation that would require listing specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an empirical experimental setup with specific hyperparameters or training configurations. It discusses theoretical properties of loss functions and regularization parameters but not practical experimental settings.