Byzantine-tolerant federated Gaussian process regression for streaming data

Authors: Xu Zhang, Zhenyuan Yuan, Minghui Zhu

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on a synthetic dataset and two real-world datasets are conducted to evaluate the proposed algorithm.
Researcher Affiliation | Academia | Xu Zhang, Pennsylvania State University (xxz313@psu.edu); Zhenyuan Yuan, Pennsylvania State University (zqy5086@psu.edu); Minghui Zhu, Pennsylvania State University (muz16@psu.edu)
Pseudocode | Yes | Algorithm 1 Byzantine-tolerant federated GPR; Algorithm 2 Agent-based local GPR: lGPR(D[i](t)); Algorithm 3 Cloud-based aggregated GPR: cGPR(ˇµ_{Z|D[i](t)}, ˇσ^2_{Z|D[i](t)}); Algorithm 4 Agent-based fused GPR: fGPR(ˇµ_{Z|D[i](t)}, ˇσ^2_{Z|D[i](t)}, ˆµ_{Z|D(t)}, ˆσ^2_{Z|D(t)}). (A minimal local-GPR sketch follows the table.)
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] We provide the simulation details in Section 4. We also include the code in the supplementary document.
Open Datasets | Yes | Experiments on a synthetic dataset and two real-world datasets are conducted to demonstrate that the Byzantine-tolerant GPR algorithm is resilient to Byzantine attacks. The first dataset is collected from a seven degrees-of-freedom SARCOS anthropomorphic robot arm [13]. The second dataset, Kin40k [27], is created using a robot arm simulator. (A hedged loading sketch follows the table.)
Dataset Splits | No | The paper mentions 'training points' and 'test points' but does not explicitly describe a separate 'validation' set or its split.
Hardware Specification | Yes | We conduct the experiments on a computer with an Intel i7-6600 CPU, 2.60 GHz, and 12 GB RAM.
Software Dependencies | No | The paper does not provide specific software dependencies or version numbers for libraries, frameworks, or specialized packages used in the experiments.
Experiment Setup | Yes | We generate ns = 10^3, 5 × 10^3, 10^4, 5 × 10^4 training points in [0, 1], respectively, and choose nt = 120 test points randomly in [0, 1]. There are n = 40 agents in a network... We use the following squared-exponential kernel k(z, z') = σ^2_f exp(−(z − z')^2 / (2ℓ^2))... we let α = 2.5%, 5%, 7.5%, 10%, 12.5%, 15%. We randomly choose the agents in the network to be compromised by same-value attacks [26], and let β = α. Specifically, for each test point z∗, the Byzantine agents only change the local predictive means to 100, that is, ˇµ_{z∗|D[i](t)} = 100 for all t, and send this incorrect prediction to the cloud. (A synthetic-setup sketch follows the table.)
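
The pseudocode row names Algorithm 2, the agent-based local GPR step. As a point of reference only, the following is a minimal sketch of a standard GP prediction with the squared-exponential kernel from the experiment setup; it is not the authors' implementation, and the hyperparameter defaults (sigma_f, ell, noise) are placeholders.

```python
import numpy as np

def sq_exp_kernel(z1, z2, sigma_f=1.0, ell=1.0):
    # k(z, z') = sigma_f^2 * exp(-(z - z')^2 / (2 * ell^2)), for 1-D inputs
    d = z1[:, None] - z2[None, :]
    return sigma_f**2 * np.exp(-d**2 / (2.0 * ell**2))

def local_gpr_predict(z_train, y_train, z_test, sigma_f=1.0, ell=1.0, noise=1e-2):
    # Agent-side GP regression: predictive mean and variance at the test points.
    K = sq_exp_kernel(z_train, z_train, sigma_f, ell) + noise * np.eye(len(z_train))
    Ks = sq_exp_kernel(z_test, z_train, sigma_f, ell)
    kss = sigma_f**2 * np.ones(len(z_test))          # diagonal of K(z_test, z_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = kss - np.sum(v**2, axis=0)
    return mean, var
```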
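For the real-world experiments, a hedged loading sketch for the SARCOS data. The file names, key names, and the choice of the first joint torque as the regression target are assumptions based on the commonly distributed .mat files; the paper does not specify them.

```python
from scipy.io import loadmat

# Assumed local file and key names for the SARCOS robot-arm data.
train = loadmat("sarcos_inv.mat")["sarcos_inv"]              # 21 input dims + 7 torques
test = loadmat("sarcos_inv_test.mat")["sarcos_inv_test"]
X_train, y_train = train[:, :21], train[:, 21]               # first joint torque as target
X_test, y_test = test[:, :21], test[:, 21]
```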
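For the synthetic experiment, a sketch of the setup described in the last row: 40 agents holding training points on [0, 1], 120 random test points, and a fraction of agents compromised by the same-value attack that reports a predictive mean of 100. The target function, noise level, length-scale, and the coordinate-wise median used as a stand-in for the paper's cloud aggregation rule (Algorithm 3) are all assumptions made for illustration; it reuses local_gpr_predict from the sketch above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_train, n_test = 40, 10**3, 120
frac_byz = 0.10                                  # one of the tested values of alpha
z_test = rng.uniform(0.0, 1.0, n_test)

def target(z):                                   # hypothetical ground-truth function
    return np.sin(2.0 * np.pi * z)

# Split the training points on [0, 1] across the agents.
shards = np.array_split(rng.uniform(0.0, 1.0, n_train), n_agents)
byzantine = set(rng.choice(n_agents, size=int(frac_byz * n_agents), replace=False))

local_means = []
for i, z_i in enumerate(shards):
    if i in byzantine:
        # Same-value attack: the compromised agent reports a mean of 100 everywhere.
        local_means.append(np.full(n_test, 100.0))
    else:
        y_i = target(z_i) + 0.1 * rng.standard_normal(len(z_i))
        mean_i, _ = local_gpr_predict(z_i, y_i, z_test, ell=0.1)
        local_means.append(mean_i)

# Stand-in for the cloud aggregation: a coordinate-wise median, which already
# discards the same-value outliers; the paper's Algorithm 3 uses a different rule.
cloud_mean = np.median(np.stack(local_means), axis=0)
```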