Adversarial Attacks on Federated-Learned Adaptive Bitrate Algorithms

Authors: Rui-Xiao Zhang, Tianchi Huang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We extensively evaluate MAFL through trace-driven experiments on two different attacking scenarios.
Researcher Affiliation | Collaboration | ¹The University of Hong Kong, ²Sony Group Corporation
Pseudocode | Yes | Algorithm 1: The workflow of training π_m
Open Source Code | No | The paper does not provide an explicit statement or a link to its open-source code for the described methodology.
Open Datasets | Yes | We employ two public network datasets: a broadband dataset provided by the Puffer project (Yan et al. 2020), and a cellular dataset collected in HSDPA (Riiser et al. 2013). ... We also use a 4K video dataset provided in (Quinlan and Sreenan 2018)
Dataset Splits | No | The paper describes how the global dataset is split across clients ('using the Dirichlet distribution with hyperparameter 0.9'), but it does not specify explicit training, validation, and test splits for model development or evaluation within these datasets (e.g., 80/10/10 percentages). A minimal sketch of such a Dirichlet client split appears below the table.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as CPU/GPU models or memory specifications.
Software Dependencies | No | The paper mentions implementing an FL framework in 'Python' and using 'PPO' and a 'DNN-based state classifier', but it does not provide specific version numbers for Python or any libraries/frameworks (e.g., PyTorch, TensorFlow).
Experiment Setup | Yes | Model Architecture. We utilize the same structure and state space as recommended in previous work (Mao, Netravali, and Alizadeh 2017), but we use PPO (Schulman et al. 2017)... For the DNN-based state classifier, we use three FC layers (with 64, 64, and 2 units, respectively), and the activation function in the final layer is softmax. Unless stated otherwise, all layers use ReLU as the activation function. Hyper-parameters. th_c (i.e., to identify those important states) is set as th_c = 0.25; th_d (i.e., to estimate the disagreement between the global model and the target model) is set as th_d = 8. The λ in Eq. (8) is set as λ = 0.1. We finally set η = 0.05 in Eq. (6) as it performs best in our experiments. A sketch of this classifier appears below the table.
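
The Dirichlet-based client partition referenced under Dataset Splits is a standard way to induce non-IID data in federated learning experiments. The paper only states the hyperparameter value (0.9), so the sketch below is an assumption-laden illustration of the common share-based variant: the function name `dirichlet_split`, the NumPy implementation, the seed, and the sample/client counts are all illustrative, not taken from the paper.

```python
import numpy as np

def dirichlet_split(num_samples, num_clients, alpha=0.9, seed=0):
    """Partition sample indices across clients with a Dirichlet prior.

    Smaller alpha -> more skewed (non-IID) client shares. Only
    alpha = 0.9 comes from the paper; everything else is illustrative.
    """
    rng = np.random.default_rng(seed)
    # Draw one share per client; shares sum to 1.
    proportions = rng.dirichlet(alpha * np.ones(num_clients))
    indices = rng.permutation(num_samples)
    # Cumulative shares give the split points into the shuffled indices.
    split_points = (np.cumsum(proportions)[:-1] * num_samples).astype(int)
    return np.split(indices, split_points)

# Example: split 10,000 network traces across 10 FL clients.
client_indices = dirichlet_split(10_000, 10, alpha=0.9)
print([len(c) for c in client_indices])
```

Note that many FL papers apply the Dirichlet draw per class label rather than over raw sample shares; the excerpt does not say which variant is used here.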
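
For the DNN-based state classifier described under Experiment Setup, the paper gives the layer widths (64, 64, 2), the ReLU hidden activations, and the softmax output, but not the framework or the input dimensionality. The following is a minimal sketch assuming PyTorch; the class name `StateClassifier` and the placeholder `state_dim=48` are assumptions, and the quoted hyper-parameters are listed as constants without reproducing how Eqs. (6) and (8) consume them.

```python
import torch
import torch.nn as nn

# Hyper-parameters quoted from the paper; their exact usage in the
# attack pipeline (Eqs. 6 and 8) is not reproduced here.
TH_C = 0.25   # threshold for identifying important states
TH_D = 8      # threshold on global/target model disagreement
LAMBDA = 0.1  # lambda in Eq. (8)
ETA = 0.05    # eta in Eq. (6)

class StateClassifier(nn.Module):
    """DNN-based state classifier: three FC layers (64, 64, 2 units),
    ReLU hidden activations, softmax on the output, per the paper."""

    def __init__(self, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
            nn.Softmax(dim=-1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Usage: classify a batch of ABR states. The state dimensionality is
# not given in the excerpt; 48 is a placeholder.
clf = StateClassifier(state_dim=48)
probs = clf(torch.randn(32, 48))  # -> (32, 2) class probabilities
```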