PyGlove: Symbolic Programming for Automated Machine Learning

Authors: Daiyi Peng, Xuanyi Dong, Esteban Real, Mingxing Tan, Yifeng Lu, Gabriel Bender, Hanxiao Liu, Adam Kraft, Chen Liang, Quoc Le

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through case studies on ImageNet and NAS-Bench-101, we show that with PyGlove users can easily convert a static program into a search space, quickly iterate on the search spaces and search algorithms, and craft complex search flows to achieve better results.
Researcher Affiliation | Industry | Daiyi Peng, Xuanyi Dong, Esteban Real, Mingxing Tan, Yifeng Lu, Hanxiao Liu, Gabriel Bender, Adam Kraft, Chen Liang, Quoc V. Le; Google Research, Brain Team; {daiyip, ereal, tanmingxing, yifenglu, hanxiaol, gbender, adamkraft, crazydonkey, qvl}@google.com; xuanyi.dxy@gmail.com
Pseudocode | No | The paper includes code snippets and diagrams of symbolic trees (e.g., Figure 2, Figure 7) but no explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | The source code of MNIST is included in Appendix B.5. ... Source codes are included in Appendix C.2.
Open Datasets | Yes | For example, on the ImageNet dataset [5]... ... We use MobileNetV2 [41] as an example to demonstrate how to explore new search spaces and search algorithms. For a fair comparison, we first retrain the MobileNetV2 model on ImageNet to obtain a baseline. ... We have reproduced popular NAS papers, including NAS-Bench-101 [38]...
Dataset Splits | Yes | With our training setup, it achieves a validation accuracy of 73.1% (Table 3, row 1) compared with 72.0% in the original MobileNetV2 paper. ... The test accuracies and MAdds are based on 3 runs.
Hardware Specification | Yes | The unit cost for search and training is defined as the TPU hours to train a MobileNetV2 model on ImageNet for 360 epochs.
Software Dependencies | No | The paper mentions software frameworks like PyTorch [36] and TensorFlow [37] but does not provide specific version numbers for them or any other libraries required for replication.
Experiment Setup | Yes | Details about our experiment setup, search space definitions, and the code for creating search spaces can be found in Appendix C.1. ... The unit cost for search and training is defined as the TPU hours to train a MobileNetV2 model on ImageNet for 360 epochs. ... To make model sizes comparable, we constrain the search to 300M multiply-adds using TuNAS's absolute reward function [15]. ... We used Regularized Evolution [16] for all these searches, each with 500 runs.
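The core idea the review keeps quoting, converting a static program into a search space by replacing fixed values with symbolic choice points and then sampling concrete programs from it, can be sketched in plain Python. This is a stdlib-only toy, not the real PyGlove API: the `OneOf` class, `materialize` function, and the `reward` function are all hypothetical stand-ins for what PyGlove's symbolic placeholders and tuning loop provide.

```python
import random

class OneOf:
    """Toy choice point: marks a value to be searched over (hypothetical,
    analogous in spirit to a PyGlove symbolic placeholder)."""
    def __init__(self, candidates):
        self.candidates = candidates

def materialize(spec, rng):
    """Recursively replace every OneOf in a nested config with a
    concrete, randomly chosen candidate."""
    if isinstance(spec, OneOf):
        return materialize(rng.choice(spec.candidates), rng)
    if isinstance(spec, dict):
        return {k: materialize(v, rng) for k, v in spec.items()}
    if isinstance(spec, list):
        return [materialize(v, rng) for v in spec]
    return spec

# A "static program" (here, a model config) turned into a search space
# by swapping fixed hyperparameters for choice points.
search_space = {
    "filters": OneOf([16, 32, 64]),
    "kernel": OneOf([3, 5]),
    "layers": [{"act": OneOf(["relu", "swish"])} for _ in range(2)],
}

def reward(config):
    # Hypothetical stand-in for a trained model's reward signal.
    return config["filters"] - 2 * config["kernel"]

# Random search: sample concrete programs and keep the best by reward.
rng = random.Random(0)
best = max((materialize(search_space, rng) for _ in range(20)), key=reward)
```

A real search would replace `reward` with a training-and-evaluation step and random sampling with a search algorithm such as Regularized Evolution, but the separation of concerns is the same: the search space is just the program with choice points, and the algorithm only ever sees materialized candidates.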