Differentially Private Correlation Alignment for Domain Adaptation
Authors: Kaizhong Jin, Xiang Cheng, Jiaxi Yang, Kaiyuan Shen
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on standard benchmark datasets confirm the effectiveness of our approach. |
| Researcher Affiliation | Academia | State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate our approach on two popular domain adaptation benchmark datasets. The first one is Office-Caltech10 dataset [Gong et al., 2012]... The second one is Amazon review dataset [Blitzer et al., 2006]... |
| Dataset Splits | No | The paper defines domain adaptation tasks (e.g., A→D (train on A, test on D)) but does not specify train/validation/test splits within these domains (e.g., percentages or sample counts for each subset). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | There are 5 prime parameters in PRIMA. Among them, ϵ, δ, and σ are privacy parameters, while batch size b and clipping bound c are model training parameters. We follow the experimental protocol used in [Abadi et al., 2016] by setting σ = 4, δ = 10⁻⁵, and compute the value of ϵ as a function of the training epochs E. We follow the experimental protocol of [Abadi et al., 2016] again by setting c as the median of the unclipped gradients over the course of training. Empirically, batch size b is set to 25. |
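The setup row above follows the DP-SGD protocol of Abadi et al. [2016]: per-example gradients are clipped to an L2 bound c, Gaussian noise with standard deviation σ·c is added to the summed gradients, and ϵ is tracked as a function of the training epochs. Since the paper releases no source code, the sketch below is only an illustrative reconstruction of that noisy gradient step under the reported hyperparameters (σ = 4, δ = 10⁻⁵, b = 25); the function name `dp_gradient_step` and the fixed placeholder value for c are assumptions, not part of PRIMA.

```python
import numpy as np

SIGMA = 4.0       # noise multiplier (paper: sigma = 4)
DELTA = 1e-5      # privacy parameter (paper: delta = 10^-5)
BATCH_SIZE = 25   # paper: batch size b = 25
CLIP_BOUND = 1.0  # paper sets c to the median unclipped gradient norm during
                  # training; a fixed placeholder is used here for illustration

def dp_gradient_step(per_example_grads, c=CLIP_BOUND, sigma=SIGMA):
    """One DP-SGD update in the style of Abadi et al. [2016]:
    clip each per-example gradient to L2 norm c, sum, add Gaussian
    noise with std sigma * c, and average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, c / (norm + 1e-12)))
    noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
        scale=sigma * c, size=per_example_grads[0].shape)
    return noisy_sum / len(per_example_grads)

# Example: one noisy update over a toy batch of b = 25 gradients.
grads = [np.random.randn(10) for _ in range(BATCH_SIZE)]
update = dp_gradient_step(grads)
```

The resulting ϵ for a given number of epochs would then be obtained with a moments/RDP accountant as in Abadi et al. [2016]; that bookkeeping is omitted here.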