Development of JavaScript-based deep learning platform and application to distributed training
Authors: Masatoshi Hidaka, Ken Miura, Tatsuya Harada
ICLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experiments, we demonstrate their practicality by training VGGNet in a distributed manner using web browsers as the client. |
| Researcher Affiliation | Academia | Masatoshi Hidaka, Ken Miura & Tatsuya Harada Department of Information Science and Technology The University of Tokyo 7-3-1, Hongo, Bunkyo-ku, Tokyo, Japan {hidaka,miura,harada}@mi.t.u-tokyo.ac.jp |
| Pseudocode | Yes | Figure 1: Example of forward calculation of fully-connected layer using Sushi2 (an illustrative sketch of this computation appears below the table) |
| Open Source Code | Yes | The source code is provided as open-source software. Download code from https://github.com/mil-tokyo |
| Open Datasets | Yes | We evaluated them by training LeNet with MNIST dataset (Le Cun et al., 1998b). |
| Dataset Splits | No | While the paper mentions using the MNIST dataset and shows 'mnist train' and 'mnist test' in a configuration file example (Figure 2), it does not explicitly provide details about a specific validation dataset split. |
| Hardware Specification | Yes | Table 2: Hardware used for the experiments. GPU: AMD FirePro S9170, NVIDIA K80. CPU: Intel Core i7-5930K, Intel Xeon E5-2690 v3. |
| Software Dependencies | Yes | Firefox (version 32) and node.js (version 4.3.0) are used as the JavaScript execution environment. |
| Experiment Setup | Yes | The network structure is based on Le Cun et al. (1998a), which contains two convolutional layers and two fully-connected layers. The batch size is 64. ... The optimization method is momentum SGD. ... The batch size is 256 according to (Simonyan & Zisserman, 2014a). (a minimal momentum SGD sketch follows below the table) |
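
The Pseudocode row above quotes the paper's Figure 1, which shows a forward pass of a fully-connected layer written with the Sushi2 matrix library. As a point of reference only, the sketch below implements the same computation (y = xW + b over a mini-batch) in plain TypeScript; the `fullyConnectedForward` function and all names in it are illustrative assumptions and do not reflect the actual Sushi2 API.

```typescript
// Illustrative only: a fully-connected (dense) layer forward pass, y = xW + b,
// written with plain arrays. The Sushi2 library described in the paper provides
// its own matrix objects and WebCL acceleration; none of its API is used here.

type Matrix = number[][]; // [rows][cols]

function fullyConnectedForward(x: Matrix, w: Matrix, b: number[]): Matrix {
  const batch = x.length;     // mini-batch size (rows of x)
  const inDim = w.length;     // input dimension (rows of W)
  const outDim = w[0].length; // output dimension (cols of W)
  const y: Matrix = [];
  for (let n = 0; n < batch; n++) {
    const row = new Array<number>(outDim).fill(0);
    for (let j = 0; j < outDim; j++) {
      let acc = b[j];
      for (let i = 0; i < inDim; i++) {
        acc += x[n][i] * w[i][j];
      }
      row[j] = acc;
    }
    y.push(row);
  }
  return y;
}

// Example: a batch of 2 inputs with 3 features mapped to 2 outputs.
const out = fullyConnectedForward(
  [[1, 2, 3], [4, 5, 6]],
  [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]],
  [0.0, 1.0],
);
console.log(out); // ≈ [[2.2, 3.8], [4.9, 7.4]] (up to floating-point rounding)
```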
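
The Experiment Setup row states that the optimizer is momentum SGD. The following is a minimal sketch of a standard momentum SGD update step; the update rule is the common textbook form, and the learning-rate and momentum values are placeholder assumptions, not the hyperparameters used in the paper.

```typescript
// Illustrative only: one momentum SGD update in the common form
//   v <- momentum * v - lr * grad
//   w <- w + v
// Hyperparameter defaults below are placeholders, not the paper's settings.

function momentumSgdStep(
  weights: number[],
  grads: number[],
  velocity: number[],
  lr = 0.01,
  momentum = 0.9,
): void {
  for (let i = 0; i < weights.length; i++) {
    velocity[i] = momentum * velocity[i] - lr * grads[i];
    weights[i] += velocity[i];
  }
}

// Example usage with a tiny parameter vector.
const w = [0.5, -0.2];
const v = [0, 0];
momentumSgdStep(w, [0.1, -0.3], v); // first step: v = [-0.001, 0.003]
console.log(w);                     // ≈ [0.499, -0.197]
```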