Coded Sequential Matrix Multiplication For Straggler Mitigation
Authors: Nikhil Krishnan Muralee Krishnan, Seyederfan Hosseini, Ashish Khisti
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | These are further validated by experiments, where we implement our schemes to train neural networks. |
| Researcher Affiliation | Academia | M. Nikhil Krishnan University of Toronto nikhilkrishnan.m@gmail.com Erfan Hosseini University of Toronto ehosseini2108@gmail.com Ashish Khisti University of Toronto akhisti@ece.utoronto.ca |
| Pseudocode | Yes | Algorithm 1: Algorithm used by master to assign mini-tasks in the DIP coding scheme |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-sourcing of its methodology's code. |
| Open Datasets | Yes | Training is performed for 250 iterations over MNIST dataset with a batch size of 1024 using SGD. |
| Dataset Splits | No | The paper mentions training and testing but does not explicitly provide details about validation dataset splits. |
| Hardware Specification | Yes | We use four virtual machines with 8GB of RAM and 4 vCPUs as workers and one more machine with 16GB of RAM and 8 vCPUs as the master. |
| Software Dependencies | No | The paper mentions 'mpi4py [19] and NumPy' but does not specify their version numbers. |
| Experiment Setup | Yes | Each NN model is fully connected with two hidden layers of size 3000 followed by a ReLU activation. Training is performed for 250 iterations over MNIST dataset with a batch size of 1024 using SGD. |
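The experiment setup row describes a concrete training recipe: a fully connected network with two hidden layers of size 3000 and ReLU activations, trained by SGD for 250 iterations with batch size 1024 on MNIST. Since the paper's own code is not open-sourced, the sketch below is a hypothetical NumPy reconstruction of that setup, not the authors' implementation; the demo at the bottom swaps in small synthetic data and reduced dimensions so it runs quickly, with the paper's sizes left as the defaults.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def init_layer(n_in, n_out, rng):
    # He initialization, a standard choice for ReLU layers (our assumption;
    # the paper does not state its initialization scheme)
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in), np.zeros(n_out)

def train(X, Y, hidden=3000, iters=250, batch=1024, lr=0.01, seed=0):
    """Two-hidden-layer fully connected net with ReLU, trained by plain SGD.
    Defaults mirror the setup quoted above (hidden=3000, batch=1024, 250 iters);
    the learning rate is a placeholder, as the paper's value is not quoted."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = Y.shape[1]
    W1, b1 = init_layer(d, hidden, rng)
    W2, b2 = init_layer(hidden, hidden, rng)
    W3, b3 = init_layer(hidden, k, rng)
    for _ in range(iters):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        xb, yb = X[idx], Y[idx]
        # forward pass
        h1 = relu(xb @ W1 + b1)
        h2 = relu(h1 @ W2 + b2)
        logits = h2 @ W3 + b3
        # softmax cross-entropy gradient at the output
        z = np.exp(logits - logits.max(axis=1, keepdims=True))
        p = z / z.sum(axis=1, keepdims=True)
        g = (p - yb) / len(xb)
        # backpropagate through the two ReLU layers
        dW3 = h2.T @ g; db3 = g.sum(0)
        g2 = (g @ W3.T) * (h2 > 0)
        dW2 = h1.T @ g2; db2 = g2.sum(0)
        g1 = (g2 @ W2.T) * (h1 > 0)
        dW1 = xb.T @ g1; db1 = g1.sum(0)
        # SGD step
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
        W3 -= lr * dW3; b3 -= lr * db3
    return (W1, b1, W2, b2, W3, b3)

def predict(params, X):
    W1, b1, W2, b2, W3, b3 = params
    return np.argmax(relu(relu(X @ W1 + b1) @ W2 + b2) @ W3 + b3, axis=1)

# Demo on small synthetic data (stand-in for MNIST, reduced sizes for speed)
rng = np.random.default_rng(1)
X = rng.standard_normal((512, 20))
y_idx = (X[:, 0] > 0).astype(int)          # linearly separable labels
Y = np.eye(2)[y_idx]                        # one-hot targets
params = train(X, Y, hidden=32, iters=200, batch=128, lr=0.1)
acc = (predict(params, X) == y_idx).mean()
```

In the paper's distributed setting, the matrix products in the forward and backward passes above are the operations that the master splits into coded mini-tasks across workers; this sketch shows only the single-machine computation.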