AdaScale SGD: A User-Friendly Algorithm for Distributed Training
Authors: Tyler Johnson, Pulkit Agrawal, Haijie Gu, Carlos Guestrin
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform large-scale empirical evaluations on five training benchmarks. Tasks include image classification, machine translation, object detection, and speech recognition. The results align remarkably well with our theory, as AdaScale systematically preserves model quality across many batch sizes. This includes training ImageNet with batch size 32k and Transformer with 262k max tokens per batch. |
| Researcher Affiliation | Industry | Apple, Seattle, WA. Correspondence to: T. Johnson <tbjohns@apple.com>, P. Agrawal <pulkit_agrawal@apple.com>. |
| Pseudocode | Yes | Algorithm 1 Scaled SGD ... Algorithm 2 AdaScale SGD (a hedged sketch follows the table) |
| Open Source Code | No | Not found. The paper does not provide any link or explicit statement about releasing the source code for the described methodology. |
| Open Datasets | Yes | Table 1: Overview of training benchmarks. ... Dataset ... CIFAR-10 ... ImageNet ... LibriSpeech ... WMT-2014 ... PASCAL VOC |
| Dataset Splits | Yes | We use standard lr parameters for imagenet and yolo. Otherwise, we use tuned parameters that approximately maximize the validation metric (to our knowledge, there are no standard schedules for solving speech and transformer with momentum-SGD). ... Val. Acc (%) ... Val. WAcc (%) ... Val. mAP (%) |
| Hardware Specification | No | Not found. The paper discusses distributed training and parallel processing but does not provide any specific hardware details such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | Not found. The paper discusses models and algorithms (e.g., SGD, Transformer, ResNet, YOLOv3) but does not list any specific software libraries or their version numbers used in the experiments. |
| Experiment Setup | Yes | We use momentum ρ = 0.9 except for transformer, in which case we use ρ = 0.99 for greater training stability. ... Specifically, lr is an exponential decay function for cifar10 and speech, and a step decay function otherwise. We use standard lr parameters for imagenet and yolo. ... LSW trains for T1/S iterations, applying warm-up to the first 5.5% of iterations. ... training ImageNet with batch size 32k and Transformer with 262k max tokens per batch. (Schedule shapes are sketched after the table.) |
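The Pseudocode row notes Algorithms 1 (Scaled SGD) and 2 (AdaScale SGD), but the page does not reproduce them and no source code is released. Below is a minimal Python sketch of a gain-ratio scaling rule in the spirit of Algorithm 2, assuming the gain form r = (σ² + μ²) / (σ²/S + μ²) described in the paper; the gradient-statistics estimator (moving averages with bias correction), the clipping to [1, S], and all constants are assumptions of this sketch, not the authors' implementation.

```python
"""Minimal sketch of a gain-ratio scaling rule in the spirit of AdaScale SGD
(Algorithm 2). The variance/norm estimator, moving averages, bias correction,
and clipping below are assumptions for illustration, not the paper's exact code."""
import numpy as np


class AdaScaleSketch:
    def __init__(self, num_workers, base_lr_fn, theta=0.999, eps=1e-12):
        self.S = num_workers          # scale: number of workers / batch-size multiplier
        self.base_lr_fn = base_lr_fn  # single-worker learning-rate schedule lr(tau)
        self.theta = theta            # moving-average factor (assumed value)
        self.eps = eps
        self.var_avg = 0.0            # running estimate of gradient variance (sigma^2)
        self.sqnorm_avg = 0.0         # running estimate of squared true-gradient norm (mu^2)
        self.t = 0                    # number of steps taken (for bias correction)
        self.tau = 0.0                # scale-invariant iteration count, advanced by the gain

    def step(self, params, worker_grads):
        """params: 1-D numpy array; worker_grads: list of S per-worker gradient vectors."""
        grads = np.stack(worker_grads)                 # shape (S, dim)
        mean_grad = grads.mean(axis=0)                 # aggregated large-batch gradient
        var_est = grads.var(axis=0, ddof=1).sum()      # sample variance across workers
        sqnorm_est = max(mean_grad @ mean_grad - var_est / self.S, 0.0)
        # Moving averages with bias correction (an assumption of this sketch).
        self.t += 1
        self.var_avg = self.theta * self.var_avg + (1 - self.theta) * var_est
        self.sqnorm_avg = self.theta * self.sqnorm_avg + (1 - self.theta) * sqnorm_est
        corr = 1 - self.theta ** self.t
        var_hat, sqnorm_hat = self.var_avg / corr, self.sqnorm_avg / corr
        # Gain ratio r in [1, S]: r = (sigma^2 + mu^2) / (sigma^2 / S + mu^2).
        r = (var_hat + sqnorm_hat) / max(var_hat / self.S + sqnorm_hat, self.eps)
        r = float(np.clip(r, 1.0, self.S))
        params = params - r * self.base_lr_fn(self.tau) * mean_grad  # scaled SGD update
        self.tau += r    # advance scale-invariant iterations; training stops once tau reaches T1
        return params, r
```

The key design point the sketch tries to reflect is that the gain r, not the raw worker count S, both scales the learning rate and advances the iteration counter, which is how the algorithm stays "user-friendly" across batch sizes.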
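The Experiment Setup row describes exponential-decay schedules for cifar10 and speech, step-decay schedules otherwise, and the LSW baseline's warm-up over the first 5.5% of iterations. The helpers below sketch those schedule shapes only; all constants (base rates, decay factors, milestones, the 10000-iteration example) are placeholder assumptions, not values from the paper.

```python
"""Illustrative learning-rate schedule shapes matching the setup description:
exponential decay, step decay, and linear warm-up over the first 5.5% of iterations.
All numeric constants are placeholders for illustration."""

def exponential_decay(base_lr, decay_rate, total_iters):
    # lr(t) = base_lr * decay_rate ** (t / total_iters); decay_rate is a placeholder.
    return lambda t: base_lr * decay_rate ** (t / total_iters)

def step_decay(base_lr, milestones, factor=0.1):
    # Multiply the learning rate by `factor` at each milestone iteration.
    def lr(t):
        drops = sum(1 for m in milestones if t >= m)
        return base_lr * factor ** drops
    return lr

def with_linear_warmup(schedule, total_iters, warmup_frac=0.055):
    # Linearly ramp from 0 up to schedule(t) over the first 5.5% of iterations.
    warmup_iters = warmup_frac * total_iters
    def lr(t):
        scale = min(t / warmup_iters, 1.0) if warmup_iters > 0 else 1.0
        return scale * schedule(t)
    return lr

if __name__ == "__main__":
    # Hypothetical step-decay run with warm-up; milestones and base_lr are made up.
    total = 10000
    lr = with_linear_warmup(step_decay(0.1, milestones=[3000, 6000, 8000]), total)
    print([round(lr(t), 4) for t in (0, 275, 550, 2999, 3000, 9999)])
```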