ACMo: Angle-Calibrated Moment Methods for Stochastic Optimization

Authors: Xunpeng Huang, Runxin Xu, Hao Zhou, Zhe Wang, Zhengyang Liu, Lei Li

AAAI 2021, pp. 7857-7864 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on a variety of CV and NLP tasks demonstrate that ACMo has a comparable convergence to state-of-the-art Adam-type optimizers, and even a better generalization performance in most cases.
Researcher Affiliation | Collaboration | 1 Bytedance AI Lab, Shanghai, China; 2 Peking University, Beijing, China; 3 Ohio State University, Columbus, Ohio, United States; 4 Beijing Institute of Technology, Beijing, China. {huangxunpeng, zhouhao.nlp, lileilab}@bytedance.com, runxinxu@gmail.com, wang.10982@osu.edu, zhengyang@bit.edu.cn
Pseudocode | Yes | Algorithm 1: Angle-Calibrated Moment method (a hedged optimizer-skeleton sketch follows the table).
Open Source Code | Yes | The code is available at https://github.com/Xunpeng746/ACMo.
Open Datasets | Yes | We used two datasets CIFAR-10, CIFAR-100 (Krizhevsky, Hinton et al. 2009), and tested three different CNN architectures including VGGNet (Simonyan and Zisserman 2015), ResNet (He et al. 2016) and DenseNet (Huang et al. 2017). ... We perform experiments on WMT 14 EN-DE dataset with Transformer (Vaswani et al. 2017). (A CIFAR loading sketch follows the table.)
Dataset Splits | No | The paper mentions using the CIFAR-10, CIFAR-100, and WMT 14 EN-DE datasets, which have standard splits, and states that cross-validation was performed. However, it does not explicitly report the split percentages or sample counts for the training, validation, or test sets.
Hardware Specification | Yes | we utilize 4 Tesla-V100-PCIE (16GB) GPUs to train the Transformer base, where we set batch token size as 4096 per GPU in the training process.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as the programming language, deep learning framework (e.g., PyTorch, TensorFlow), or other libraries.
Experiment Setup | Yes | For all optimizers in our experiments, we choose the best initial step size from {1e-1, 5e-2, 1e-2, 5e-3, ..., 5e-5}. ... We ran 200 epochs, and set the learning rate to decay by 0.1 every 50 epochs. ... we set batch token size as 4096 per GPU in the training process. ... We directly apply the default hyperparameters, i.e., β_t = 0.9, Ψ(β̂_t, β̂_{t-1}) = β̂_t, for all our experiments. (A schedule sketch follows the table.)
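
For the Pseudocode row: a minimal sketch of how a first-moment optimizer of this kind is typically packaged as a PyTorch optimizer. This is not the authors' Algorithm 1; the angle-calibration step is omitted and a plain momentum update stands in for it. The class name MomentSketch and the defaults are placeholders, and the actual update rule should be taken from Algorithm 1 or the official repository.

import torch
from torch.optim import Optimizer


class MomentSketch(Optimizer):  # hypothetical name, not from the paper
    def __init__(self, params, lr=1e-3, beta=0.9):
        defaults = dict(lr=lr, beta=beta)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            lr, beta = group["lr"], group["beta"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "m" not in state:
                    state["m"] = torch.zeros_like(p)
                m = state["m"]
                # First moment: m_t = beta * m_{t-1} + (1 - beta) * g_t.
                m.mul_(beta).add_(p.grad, alpha=1 - beta)
                # Placeholder descent step; ACMo additionally calibrates the
                # angle between the moment and the gradient (see Algorithm 1).
                p.add_(m, alpha=-lr)
        return loss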
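
For the dataset and split rows: a hedged sketch of loading CIFAR-10 with its standard torchvision split (50,000 training / 10,000 test images), since the paper does not report explicit split sizes. The root path and transform are assumptions, not taken from the paper.

import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # placeholder; the paper's augmentation is not specified here
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(root="./data", train=False,
                                        download=True, transform=transform)
print(len(train_set), len(test_set))  # 50000 / 10000 under the standard split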
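
For the Experiment Setup row: a hedged sketch of the reported step-size grid and decay schedule, assuming a PyTorch-style training loop. SGD and the tiny linear model stand in for the optimizer and networks under test, and the intermediate grid values between 5e-3 and 5e-5 are interpolated from the paper's "..." and should be treated as an assumption.

import torch

# Step-size grid as reported; the values 1e-3, 5e-4, 1e-4 follow the
# alternating 1e-k / 5e-(k+1) pattern implied by the paper's "..." (assumption).
lr_grid = [1e-1, 5e-2, 1e-2, 5e-3, 1e-3, 5e-4, 1e-4, 5e-5]

model = torch.nn.Linear(10, 2)                                   # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=lr_grid[0])   # placeholder optimizer
# Decay the learning rate by 0.1 every 50 epochs, over 200 epochs in total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

for epoch in range(200):
    # ... one training epoch here ...
    scheduler.step()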