Byzantine Resilient Distributed Multi-Task Learning

Authors: Jiani Li, Waseem Abbas, Xenofon Koutsoukos

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct three experiments for both regression and classification problems and demonstrate that our approach yields good empirical performance for non-convex models, such as convolutional neural networks.
Researcher Affiliation | Academia | Jiani Li, Waseem Abbas, and Xenofon Koutsoukos, Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA. {jiani.li, waseem.abbas, xenofon.koutsoukos}@vanderbilt.edu
Pseudocode | No | The paper describes the steps of the proposed rule in text and mathematical formulas but does not provide pseudocode or a clearly labeled algorithm block.
Open Source Code | Yes | Our code is available at https://github.com/JianiLi/resilientDistributedMTL.
Open Datasets | Yes | Human Activity Recognition: Mobile phone sensor data (accelerometer and gyroscope) is collected from 30 individuals... (https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones). Digit Classification: We consider a network of ten agents performing digit classification. Five of the ten agents have access to the MNIST dataset [45] (http://yann.lecun.com/exdb/mnist) (group 1) and the other five have access to the synthetic dataset (https://www.kaggle.com/prasunroy/synthetic-digits) (group 2)...
Dataset Splits | No | The paper describes the datasets and their use in experiments but does not explicitly provide training, validation, and test dataset splits with specific percentages or sample counts.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper mentions general techniques like 'mini-batch gradient descent' and 'SGD' but does not specify software dependencies with version numbers (e.g., specific libraries or frameworks like TensorFlow/PyTorch with their versions).
Experiment Setup | Yes | At each iteration, Byzantine agents send random values (for each dimension) from the interval [15, 16] for target localization, and [0, 0.1] for the other two case studies.
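The quoted setup fully specifies the Byzantine behavior: at every iteration, a faulty agent replaces its true message with a vector whose entries are drawn independently and uniformly from a fixed interval. A minimal sketch of that attack model is below; the dimension values and the `byzantine_message` helper are illustrative assumptions, not from the paper, while the intervals [15, 16] and [0, 0.1] are taken directly from the quoted text.

```python
import random

def byzantine_message(dim, low, high, rng):
    """Sketch of the paper's attack model: a Byzantine agent sends a vector
    with each dimension drawn independently and uniformly from [low, high]."""
    return [rng.uniform(low, high) for _ in range(dim)]

rng = random.Random(0)

# Interval [15, 16] for target localization (dimension 2 is an assumption).
msg_localization = byzantine_message(2, 15.0, 16.0, rng)

# Interval [0, 0.1] for the other two case studies (dimension 10 is an assumption).
msg_other = byzantine_message(10, 0.0, 0.1, rng)
```

In a simulation, these vectors would be substituted for the faulty agents' transmitted model parameters at every iteration, before the honest agents apply the paper's resilient aggregation rule.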