Fair and Efficient Contribution Valuation for Vertical Federated Learning

Authors: Zhenan Fan, Huang Fang, Xinglu Wang, Zirui Zhou, Jian Pei, Michael Friedlander, Yong Zhang

ICLR 2024

Reproducibility assessment — for each variable, the extracted result and the supporting LLM response:
Research Type: Experimental
LLM Response: "Both theoretical analysis and extensive experimental results demonstrate the fairness, efficiency, adaptability, and effectiveness of VerFedSV." ... (Section 7, Experiments) "We conduct extensive experiments on real-world datasets, including Adult (Zeng et al., 2008), Web (Platt, 1998), Covtype (Blackard & Dean, 1999), and RCV1 (Lewis et al., 2004)."

Researcher Affiliation: Collaboration
LLM Response: University of British Columbia ({zhenanf, hgfang, mpf}@cs.ubc.ca); Simon Fraser University ({xinglu_wang_2, jpei}@cs.sfu.ca); Huawei Technologies Canada ({zirui.zhou, yong.zhang3}@huawei.com).

Pseudocode: No
LLM Response: The paper describes algorithms (e.g., FedSGD in Section A.1, VAFL in Section A.2) in prose, but does not present them as structured pseudocode or algorithm blocks with formal labels.

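For orientation, the sketch below shows what one synchronous FedSGD round looks like in a vertical FL setting, written in Julia to match the paper's implementation language. The model (logistic regression over vertically partitioned features), the variable names, and the update rule are our assumptions for illustration, not a transcription of the paper's algorithm.

```julia
# Hypothetical sketch of one synchronous FedSGD round for vertical FL.
# Assumptions (not from the paper): logistic regression over M clients,
# client m holding a feature block X[m] (n × d_m) and local weights w[m].

using Random

sigmoid(z) = 1 / (1 + exp(-z))

function fedsgd_round!(w, X, y; eta=0.1, tau=64, rng=Random.default_rng())
    n = length(y)
    batch = randperm(rng, n)[1:tau]              # server samples a mini-batch
    # Each client sends its partial scores for the batch (no raw features).
    partial = [X[m][batch, :] * w[m] for m in eachindex(X)]
    scores = reduce(+, partial)                  # server aggregates the scores
    # Server computes dL/dscore for the logistic loss (labels in {0,1})
    # and broadcasts it back to all clients.
    g = (sigmoid.(scores) .- y[batch]) ./ tau
    for m in eachindex(X)                        # each client updates locally
        w[m] .-= eta .* (X[m][batch, :]' * g)
    end
    return w
end
```

The structural point is that clients exchange only partial scores and a shared gradient signal, never their raw feature blocks.
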
Open Source Code: Yes
LLM Response: "Our code is submitted in the supplementary material."

Open Datasets: Yes
LLM Response: "We use four real-world data sets, Adult (Zeng et al., 2008), Web (Platt, 1998), Covtype (Blackard & Dean, 1999), and RCV1 (Lewis et al., 2004)." Footnote 1: "From the website of LIBSVM (Chang & Lin, 2011), https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/"

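All four datasets are distributed in the LIBSVM sparse text format, where each line reads `label index:value index:value ...`. A minimal, dependency-free Julia reader is sketched below; the function name, the dense output, and the explicit `nfeatures` argument are our choices, not part of the paper's code.

```julia
# Minimal reader for the LIBSVM sparse text format ("label idx:val idx:val ...").
# Returns a label vector and a dense n × nfeatures matrix; name is hypothetical.
function read_libsvm(path::AbstractString, nfeatures::Integer)
    labels = Float64[]
    rows = Vector{Vector{Float64}}()
    for line in eachline(path)
        isempty(strip(line)) && continue
        toks = split(strip(line))
        push!(labels, parse(Float64, toks[1]))
        x = zeros(Float64, nfeatures)
        for tok in toks[2:end]
            idx, val = split(tok, ':')
            x[parse(Int, idx)] = parse(Float64, val)
        end
        push!(rows, x)
    end
    return labels, permutedims(reduce(hcat, rows))
end
```
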
Dataset Splits: No
LLM Response: The paper mentions "training", "validation", and "test" sets but does not specify how the datasets are split into them (e.g., percentages, sample counts, or a specific partitioning method).

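Because the partitioning is unreported, any reproduction must choose its own split. A seeded split like the following at least makes that choice explicit and repeatable; the 70/15/15 ratios and the seed are placeholders, not values from the paper.

```julia
using Random

# Seeded 70/15/15 train/validation/test split. The ratios and the seed
# are our placeholders, since the paper does not specify its partitioning.
function split_indices(n; train=0.7, val=0.15, seed=42)
    idx = randperm(MersenneTwister(seed), n)
    ntr = floor(Int, train * n)
    nva = floor(Int, val * n)
    return idx[1:ntr], idx[ntr+1:ntr+nva], idx[ntr+nva+1:end]
end
```
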
Hardware Specification: Yes
LLM Response: "All the experiments are conducted on a Linux server with 32 CPUs and 64 GB memory."

Software Dependencies: No
LLM Response: The paper names the software used: "We implement the VFL algorithms and the corresponding VerFedSV computation schemes described in Sections 5 and 6 in the Julia language (Bezanson et al., 2017). ... The matrix completion problem in Equation 2 is solved by the Julia package LowRankModels.jl (Udell et al., 2016)." However, it does not specify version numbers for Julia or for LowRankModels.jl.

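Without reported versions, a reproduction has to pin its own environment. In Julia this can be done through the built-in Pkg manager; the version string below is illustrative, since the paper does not say which LowRankModels.jl release was used.

```julia
using Pkg

# Pin an explicit package version (the version string is illustrative;
# the paper does not report which LowRankModels.jl release was used).
Pkg.add(name="LowRankModels", version="1.1.1")

# Record the exact environment for the reproduction report.
Pkg.status()          # prints the resolved package versions
println(VERSION)      # prints the Julia version in use
```
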
Experiment Setup: Yes
LLM Response: "Hyperparameter setting (synchronous): We summarize the hyperparameters under the synchronous setting in Table 3, where η and τ are the learning rate and the batch size used in the FedSGD algorithm, and r and λ are the rank parameter and regularization parameter used in the matrix completion problem. ... Hyperparameter setting (asynchronous): We summarize the hyperparameters under the asynchronous setting in Table 4, where η and τ are the learning rate and the batch size used in the VAFL algorithm, and t is the communication frequency for clients."

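For reference, a rank parameter r and a regularization parameter λ conventionally enter a matrix completion problem of the standard factored form below, where U is the partially observed utility matrix and Ω the set of observed entries. This is the generic formulation, not a transcription of the paper's Equation 2.

```latex
% Generic rank-r matrix completion with Frobenius-norm regularization.
% U: partially observed utility matrix; Omega: indices of observed entries.
\min_{W \in \mathbb{R}^{m \times r},\; H \in \mathbb{R}^{r \times n}}
  \sum_{(i,j) \in \Omega} \bigl( U_{ij} - (WH)_{ij} \bigr)^{2}
  \;+\; \lambda \left( \lVert W \rVert_F^{2} + \lVert H \rVert_F^{2} \right)
```
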