ApproxIFER: A Model-Agnostic Approach to Resilient and Robust Prediction Serving Systems

Authors: Mahdi Soleymani, Ramy E. Ali, Hessam Mahdavifar, A. Salman Avestimehr

AAAI 2022, pp. 8342-8350

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments on a large number of datasets and model architectures show significant degraded mode accuracy improvement by up to 58% over ParM.
Researcher Affiliation | Academia | (1) University of Michigan, Ann Arbor; (2) University of Southern California (USC)
Pseudocode | Yes | Algorithm 1: Error-locator algorithm. Input: $x_i$'s and $y_i$'s for $i \in \mathcal{A}_{\mathrm{avl}}$, $E$ and $K$. Output: Error locations. Step 1: Find polynomials $P(x) \overset{\mathrm{def}}{=} \sum_{i=0}^{K+E-1} P_i x^i$ and $Q(x) \overset{\mathrm{def}}{=} \sum_{i=0}^{K+E-1} Q_i x^i$ by solving the following system of linear equations: $P(x_i) = y_i Q(x_i)$, $i \in \mathcal{A}_{\mathrm{avl}}$. (A minimal numerical sketch of this step is given after the table.)
Open Source Code | No | The paper provides links to the code for a baseline method (ParM) and to pretrained models, but not to the source code for ApproxIFER itself. 'The pretrained models are available at https://github.com/huyvnphan/PyTorch_CIFAR10.' and 'The results of ParM are obtained using the codes available at https://github.com/thesys-lab/parity-models.'
Open Datasets | Yes | We run experiments on MNIST (LeCun et al. 1998), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), CIFAR (Krizhevsky, Hinton et al. 2009) and ImageNet (Deng et al. 2009) datasets.
Dataset Splits | No | The paper mentions using 'the test dataset' for evaluation but does not specify explicit training or validation splits, nor does it provide percentages or sample counts for any data partitioning.
Hardware Specification | Yes | The latency evaluation experiments are written with MPI4py (Dalcin et al. 2011) and performed on Amazon AWS c5.xlarge instances. (A minimal mpi4py timing sketch follows the table.)
Software Dependencies | No | The paper mentions software like 'PyTorch (Paszke et al. 2019)' and 'MPI4py (Dalcin et al. 2011)' but does not provide specific version numbers for these software dependencies.
Experiment Setup | No | The paper does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or detailed training configurations in the main text, which would be needed to reproduce the experiments.
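Step 1 of the error-locator algorithm quoted in the Pseudocode row amounts to solving a homogeneous linear system in the coefficients of $P$ and $Q$, in the spirit of Berlekamp-Welch decoding. The sketch below is a minimal numerical illustration of that step, not the authors' implementation: the function name `locate_errors`, the SVD-based null-space solve, and the tolerance for deciding $Q(x_i) \approx 0$ are all our assumptions.

```python
import numpy as np

def locate_errors(xs, ys, K, E):
    """Hypothetical sketch of Step 1 of the error-locator algorithm.

    Solves P(x_i) = y_i * Q(x_i) for the coefficients of
    P(x) = sum_{j=0}^{K+E-1} P_j x^j and Q(x) = sum_{j=0}^{K+E-1} Q_j x^j,
    then flags the available points where Q vanishes as suspected errors.
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    deg = K + E  # number of unknown coefficients in each polynomial

    # Vandermonde columns [x_i^0, ..., x_i^{deg-1}].
    V = np.vander(xs, N=deg, increasing=True)

    # Homogeneous system P(x_i) - y_i * Q(x_i) = 0 in the unknowns [P_j | Q_j].
    A = np.hstack([V, -ys[:, None] * V])

    # A nontrivial solution lies in the numerical null space of A:
    # take the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    q_coeffs = vt[-1][deg:]

    # Error locations: available points where Q(x_i) is (numerically) zero.
    q_vals = V @ q_coeffs
    tol = 1e-8 * np.max(np.abs(q_vals))
    return [int(i) for i in np.flatnonzero(np.abs(q_vals) <= tol)]
```

For example, given evaluations of a degree-(K-1) polynomial at the available points with up to $E$ of the $y_i$'s corrupted, the returned indices mark the corrupted positions, which can then be excluded before recovering the predictions.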
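The latency setup quoted in the Hardware Specification row pairs a master with worker instances over MPI. The following is a hypothetical mpi4py timing probe under that setup; the payload size, message tags, and the echo-back stand-in for model inference are our assumptions and do not reproduce the paper's measurement code.

```python
# Run with e.g. `mpiexec -n 4 python latency_probe.py`.
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# CIFAR-sized dummy query; the actual payloads are not described in the paper excerpt.
query = np.zeros(3 * 32 * 32, dtype=np.float32)

comm.Barrier()  # line up all ranks before timing starts
start = time.perf_counter()
if rank == 0:
    # Master sends the query to every worker and collects the replies.
    for dst in range(1, size):
        comm.Send(query, dest=dst, tag=0)
    reply = np.empty_like(query)
    for src in range(1, size):
        comm.Recv(reply, source=src, tag=1)
else:
    buf = np.empty_like(query)
    comm.Recv(buf, source=0, tag=0)
    comm.Send(buf, dest=0, tag=1)  # echo back as a stand-in for inference
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"round trip over {size - 1} workers: {elapsed * 1e3:.2f} ms")
```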