Privacy-Preserving Stacking with Application to Cross-organizational Diabetes Prediction

Authors: Quanming Yao, Xiawei Guo, James Kwok, Weiwei Tu, Yuqiang Chen, Wenyuan Dai, Qiang Yang

IJCAI 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Finally, we not only demonstrate the effectiveness of our method on two benchmark data sets, i.e., MNIST and NEWS20, but also apply it into a real application of cross-organizational diabetes prediction from RUIJIN data set, where privacy is of a significant concern." |
| Researcher Affiliation | Collaboration | "¹4Paradigm Inc. ²Department of Computer Science and Engineering, HKUST" |
| Pseudocode | Yes | "Algorithm 1 PLR: Privacy-preserving logistic regression. Algorithm 2 PST-S: Privacy-preserving stacking with SP. Algorithm 3 PST-F: Privacy-preserving stacking with FP. Algorithm 4 PST-H: Privacy-preserving stacking with HTL." |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | "Experiments are performed on two popular benchmark data sets for evaluating privacy-preserving learning algorithms [Shokri and Shmatikov, 2015; Papernot et al., 2017; Wang et al., 2018]: MNIST [LeCun et al., 1998] and NEWS20 [Lang, 1995] (Table 1)." |
| Dataset Splits | Yes | "60% of them are used for training (with 1/3 of this used for validation), and the remaining 20% for testing." |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers, required to replicate the experiments. |
| Experiment Setup | Yes | "We use K = 5 and 50% of the data for Dl and the remaining for Dh. We set ϵsrc = ϵtgt = 1.0. Hyper-parameters are tuned using the validation set." |
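For readers unfamiliar with the setup the table describes, the sketch below illustrates the general pattern in plain numpy: a differentially private logistic regression released via output perturbation (an illustrative stand-in, not the paper's exact PLR routine; the noise scale assumes bounded-norm rows), used as both base and second-level learner in a two-level stack where base models are trained on one half of the data (Dl) and the meta model on the other half (Dh), echoing the paper's 50/50 split. All function names and constants here are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lam=0.1, lr=0.5, n_iter=500):
    """Plain L2-regularized logistic regression via gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (sigmoid(X @ w) - y) / n + lam * w
        w -= lr * grad
    return w

def dp_logreg(X, y, eps=1.0, lam=0.1, seed=0):
    """Output perturbation: train, then add Laplace noise to the weights.
    The scale 2/(n*lam*eps) assumes feature rows with bounded L1 norm;
    this is an illustrative stand-in for the paper's PLR algorithm."""
    rng = np.random.default_rng(seed)
    w = fit_logreg(X, y, lam=lam)
    scale = 2.0 / (len(X) * lam * eps)
    return w + rng.laplace(scale=scale, size=w.shape)

def stacking(X, y, eps=1.0, n_base=3, seed=0):
    """Two-level stacking: base models on Dl, meta model on Dh,
    using a 50/50 split as in the experiment setup above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    half = len(X) // 2
    lo, hi = idx[:half], idx[half:]          # Dl and Dh index sets
    d = X.shape[1]
    # First level: each base model sees a random feature subset of Dl.
    feat_sets = [rng.choice(d, size=max(1, d // 2), replace=False)
                 for _ in range(n_base)]
    base = [dp_logreg(X[lo][:, f], y[lo], eps=eps, seed=seed + i)
            for i, f in enumerate(feat_sets)]
    # Second level: meta features are base-model probabilities on Dh.
    meta_X = np.column_stack([sigmoid(X[hi][:, f] @ w)
                              for w, f in zip(base, feat_sets)])
    meta_w = dp_logreg(meta_X, y[hi], eps=eps, seed=seed + n_base)
    return feat_sets, base, meta_w
```

Each call to `dp_logreg` spends its own privacy budget, which is why the paper tracks separate ϵ values (ϵsrc, ϵtgt) for the two levels; how that budget is allocated across levels is the core question the PST variants address.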