Differentially Private Pairwise Learning Revisited
Authors: Zhiyu Xue, Shaoyang Yang, Mengdi Huai, Di Wang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also conduct extensive experiments on real-world datasets to evaluate the proposed algorithms; the experimental results support our theoretical analysis and show the superiority of our algorithms. |
| Researcher Affiliation | Academia | Zhiyu Xue (University of Electronic Science and Technology of China), Shaoyang Yang (Harbin Institute of Technology), Mengdi Huai (University of Virginia), Di Wang (King Abdullah University of Science and Technology, di.wang@kaust.edu.sa) |
| Pseudocode | Yes | Algorithm 1 DP Gradient Descent-SC (DPGDSC) ... Algorithm 2 DP Gradient Descent (DPGDC2) ... Algorithm 3 DP Epoch Gradient Descent (DPEGD). A hedged sketch of one such DP gradient step appears after the table. |
| Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Datasets. We use two real-world datasets that are widely adopted in pairwise learning tasks. These datasets are the Diabetes dataset and the Diabetic Retinopathy dataset, which have also been used in [Huai et al., 2020]. |
| Dataset Splits | No | The paper mentions "training sample size" and using a "test set" but does not specify a validation set or explicit training/validation/test split percentages or counts for data partitioning. |
| Hardware Specification | No | The paper mentions conducting "extensive experiments" but does not provide any specific details about the hardware used (e.g., GPU models, CPU types, memory). |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or programming languages used in the experiments. |
| Experiment Setup | Yes | Experimental settings. In this paper we studied both the strongly convex and general convex cases. To conduct experiments for the strongly convex case, we add an additional Frobenius-norm or ℓ2-norm regularization term with some λ > 0 to the original problem of metric learning and AUC maximization respectively, making the loss strongly convex. We set λ = 10^-3 for AUC maximization and λ = 10^-2 for metric learning. ... For the KNN classifier, we set K to be 3. (See the second sketch after the table.) |
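
To make the pseudocode row concrete, below is a minimal Python sketch of a DPGD-style step for pairwise learning: the gradient is averaged over all sample pairs, Gaussian noise calibrated to an assumed per-sample sensitivity is added, and the iterate is projected onto an ℓ2-ball. The interface `loss_grad`, the Lipschitz bound `G`, and the naive composition-based noise scale are all assumptions for illustration; the paper's actual algorithms and privacy calibration may differ.

```python
import numpy as np

def dp_gradient_descent_pairwise(X, y, loss_grad, T, eta, epsilon, delta,
                                 G=1.0, radius=1.0, rng=None):
    """Sketch of DP gradient descent for pairwise learning (illustrative,
    not the paper's exact DPGDSC/DPEGD algorithms).

    loss_grad(w, xi, yi, xj, yj) is an assumed interface returning the
    gradient of the pairwise loss; G is an assumed bound on its norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = np.zeros(d)
    # Assumed l2-sensitivity of the averaged pairwise gradient: replacing
    # one sample changes at most (n - 1) of the n(n - 1)/2 pairs, each
    # pairwise gradient by at most 2G, so sensitivity <= 4G / n.
    sensitivity = 4.0 * G / n
    # Gaussian-mechanism noise with naive composition over T steps
    # (illustrative only; the paper's calibration may be tighter).
    sigma = sensitivity * np.sqrt(2.0 * T * np.log(1.25 / delta)) / epsilon
    for _ in range(T):
        grad = np.zeros(d)
        for i in range(n):
            for j in range(i + 1, n):
                grad += loss_grad(w, X[i], y[i], X[j], y[j])
        grad /= n * (n - 1) / 2.0
        # Noisy gradient step.
        w = w - eta * (grad + rng.normal(0.0, sigma, size=d))
        # Project back onto the l2-ball of the assumed radius.
        norm = np.linalg.norm(w)
        if norm > radius:
            w *= radius / norm
    return w
```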
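
The experiment-setup row can likewise be sketched: an ℓ2 regularizer with the reported λ values (10^-3 for AUC maximization, 10^-2 for metric learning) makes the pairwise loss strongly convex, and downstream quality is measured with a KNN classifier at K = 3. The hinge-style AUC surrogate below is an assumed choice, not necessarily the paper's loss.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

LAMBDA_AUC = 1e-3     # lambda for AUC maximization (from the paper)
LAMBDA_METRIC = 1e-2  # lambda for metric learning (from the paper)

def regularized_pairwise_grad(w, xi, yi, xj, yj, lam=LAMBDA_AUC):
    """Gradient of an assumed hinge-style AUC surrogate
    max(0, 1 - (yi - yj) * w.(xi - xj)) plus l2 regularization,
    which makes the objective lam-strongly convex."""
    diff = xi - xj
    margin = (yi - yj) * w.dot(diff)
    grad = -(yi - yj) * diff if margin < 1.0 else np.zeros_like(w)
    return grad + lam * w

def knn_accuracy(X_train, y_train, X_test, y_test):
    """Downstream evaluation with a KNN classifier, K = 3 as reported."""
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)
```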