Generic Neural Architecture Search via Regression

Authors: Yuhong Li, Cong Hao, Pan Li, Jinjun Xiong, Deming Chen

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments across 13 CNN search spaces and one NLP space demonstrate the remarkable efficiency of GenNAS using regression, in terms of both evaluating the neural architectures (quantified by the ranking correlation Spearman's ρ between the approximated performances and the downstream task performances) and the convergence speed for training (within a few seconds). A worked example of this ranking metric appears below the table.
Researcher Affiliation | Academia | University of Illinois at Urbana-Champaign, Georgia Institute of Technology, Purdue University, University at Buffalo
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | Our code has been made available at: https://github.com/leeyeehoo/GenNAS
Open Datasets | Yes | We consider 13 CNN NAS search spaces including NASBench-101 [30], NASBench-201 [35], Network Design Spaces (NDS) [56], and one NLP search space, NASBench-NLP [29]. All the training is conducted using only one batch of data with batch size 16 for 100 iterations. Details of NAS search spaces and experiment settings are in the supplemental material.
Dataset Splits | No | The paper mentions 'during validation, we adjust each stage's output importance...' but does not explicitly provide the training/validation/test dataset splits (e.g., percentages or sample counts) needed to reproduce the experiment; for dataset details it defers to the supplemental material ('Details of NAS search spaces and experiment settings are in the supplemental material').
Hardware Specification | Yes | Proxy task search is done on a single GTX 1080Ti GPU.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) needed to replicate the experiment.
Experiment Setup | Yes | All the training is conducted using only one batch of data with batch size 16 for 100 iterations. Details of NAS search spaces and experiment settings are in the supplemental material. A sketch of this training recipe also appears below the table.
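
To make the ranking metric in the Research Type row concrete: Spearman's ρ compares how a proxy ranks a set of architectures against their true downstream accuracies. The sketch below uses scipy.stats.spearmanr; the loss and accuracy arrays are hypothetical placeholders, not data from the paper.

    import numpy as np
    from scipy import stats

    # Hypothetical final regression losses from the proxy task (lower is better)
    # and the same five architectures' ground-truth test accuracies (%).
    proxy_losses = np.array([0.12, 0.05, 0.30, 0.08, 0.21])
    test_accuracies = np.array([93.1, 94.6, 90.2, 92.0, 91.5])

    # Negate the losses so a higher score means a better architecture,
    # then compare the two rankings.
    rho, p_value = stats.spearmanr(-proxy_losses, test_accuracies)
    print(f"Spearman's rho = {rho:.2f}")  # 0.90 for these placeholder values

A ρ near 1 means the cheap regression proxy orders architectures almost exactly as full downstream training would, which is the property the paper's experiments quantify across the 13 CNN search spaces and the NLP search space.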
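
The Experiment Setup row pins down the proxy's cost: a single batch of 16 examples, trained for 100 iterations. A minimal sketch of such a regression-based proxy evaluation follows; the model, optimizer, learning rate, and synthetic target tensor are illustrative assumptions, since the paper's actual regression targets (and the stage-wise output weighting mentioned in the Dataset Splits row) are specified in the paper and its supplemental material.

    import torch
    import torch.nn as nn

    def proxy_score(model: nn.Module, images: torch.Tensor,
                    targets: torch.Tensor, iters: int = 100) -> float:
        """Train `model` to regress `targets` on one fixed batch and
        return the final MSE; lower suggests a stronger architecture."""
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer/lr
        loss_fn = nn.MSELoss()
        loss = torch.tensor(0.0)
        for _ in range(iters):            # 100 iterations on the same batch
            opt.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            opt.step()
        return loss.item()

    # Usage sketch: one batch of 16, synthetic targets shaped like the output.
    # net = SomeCandidateNetwork()          # hypothetical candidate architecture
    # images = torch.randn(16, 3, 32, 32)
    # targets = torch.randn_like(net(images))
    # score = proxy_score(net, images, targets)

Because no task labels are needed, candidates can be scored in seconds on the single GTX 1080Ti noted in the Hardware Specification row, and the resulting scores can then be ranked against ground truth with the Spearman's ρ computation above.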