Graph Masked Autoencoder Enhanced Predictor for Neural Architecture Search
Authors: Kun Jing, Jungang Xu, Pengfei Li
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare our GMAE-enhanced predictor with existing predictors in different search spaces, and experimental results show that our predictor has high query utilization. Moreover, GMAE-enhanced predictor with different search strategies can discover competitive architectures in different search spaces. In order to verify the performance on the actual NAS task, we use 100 evaluated architecture samples on CIFAR-10 and train our predictor for NAS. The discovered architecture can achieve competitive results on CIFAR-10. We first verify the effectiveness of the GMAE-enhanced predictor in different search spaces in Section 4.1. Furthermore, in Section 4.2, we demonstrate that the query utilization of our search method surpasses other predictor-based methods. In addition to the experiments on two different NAS benchmarks, we also use our search method on the actual NAS task. Finally, in Section 4.3, we conduct some ablation studies. |
| Researcher Affiliation | Academia | Kun Jing, Jungang Xu and Pengfei Li, School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China |
| Pseudocode | Yes | The algorithm descriptions are shown in Algorithm 1. |
| Open Source Code | Yes | Code and supplementary materials are available at https://github.com/kunjing96/GMAENAS.git. |
| Open Datasets | Yes | In order to verify the performance on the actual NAS task, we use 100 evaluated architecture samples on CIFAR-10 and train our predictor for NAS. The NAS-Bench-101 search space [Ying et al., 2019] consists of 423K unique convolutional architectures. The NAS-Bench-101 provides their validation and test accuracies on CIFAR-10. The NAS-Bench-301 search space [Liu et al., 2019; Siems et al., 2020] is the larger cell-based space containing approximately 10^18 architectures and their performances on CIFAR-10. (A hedged sketch of querying NAS-Bench-101 follows the table.) |
| Dataset Splits | Yes | The NAS-Bench-101 provides their validation and test accuracies on CIFAR-10. The average validation error of the last 5 epochs is computed as the performance label. |
| Hardware Specification | Yes | The search cost unit is GPU-days on a Tesla V100. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | Our pre-training and fine-tuning setups are reported in the supplementary materials. The choice of masking ratio is sensitive to the search space in our experiments: a masking ratio of 75% is the best option for the NAS-Bench-101 space, while 5% is best for the NAS-Bench-301 space. We use BPR loss for fine-tuning our predictor. (Hedged sketches of node masking and the BPR ranking loss follow the table.) |
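
The Open Datasets row quotes NAS-Bench-101 and NAS-Bench-301 as the source of architecture performance labels. As a minimal sketch of how such labels are typically obtained, the snippet below assumes the reference `nasbench` package released with NAS-Bench-101; the file path and the example cell encoding are illustrative and not taken from the paper.

```python
# Hypothetical example: querying NAS-Bench-101 for CIFAR-10 accuracies.
# Assumes the reference `nasbench` package and a downloaded benchmark tfrecord.
from nasbench import api

nasbench = api.NASBench('nasbench_only108.tfrecord')  # path is illustrative

# A cell is a DAG: upper-triangular adjacency matrix plus one operation per node.
cell = api.ModelSpec(
    matrix=[[0, 1, 1, 0, 0, 0, 1],
            [0, 0, 0, 1, 0, 0, 0],
            [0, 0, 0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0, 1, 0],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 0]],
    ops=['input', 'conv3x3-bn-relu', 'conv1x1-bn-relu', 'maxpool3x3',
         'conv3x3-bn-relu', 'conv3x3-bn-relu', 'output'])

stats = nasbench.query(cell)  # one training run sampled from the benchmark
print(stats['validation_accuracy'], stats['test_accuracy'])
```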
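The Experiment Setup row reports search-space-dependent masking ratios (75% for NAS-Bench-101, 5% for NAS-Bench-301). The sketch below illustrates the kind of random node masking a graph masked autoencoder applies during pre-training; the tensor layout and the zero mask token are assumptions for illustration, not the authors' implementation.

```python
import torch

def mask_operations(op_embeddings: torch.Tensor, mask_ratio: float = 0.75):
    """Randomly replace a fraction of node (operation) embeddings with a mask token.

    op_embeddings: (num_nodes, dim) node features of one architecture graph.
    Returns the corrupted features and the boolean mask the decoder must reconstruct.
    """
    num_nodes, dim = op_embeddings.shape
    num_masked = max(1, int(round(mask_ratio * num_nodes)))
    masked_idx = torch.randperm(num_nodes)[:num_masked]

    mask = torch.zeros(num_nodes, dtype=torch.bool)
    mask[masked_idx] = True

    mask_token = torch.zeros(dim)  # learnable in practice; zeros here for brevity
    corrupted = op_embeddings.clone()
    corrupted[mask] = mask_token
    return corrupted, mask
```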
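The same row states that the predictor is fine-tuned with a BPR (Bayesian Personalized Ranking) loss. A minimal pairwise formulation is sketched below; the exact pairing scheme and any weighting used in the paper are not specified here, so this is an illustration only.

```python
import torch
import torch.nn.functional as F

def bpr_loss(pred_scores: torch.Tensor, true_perf: torch.Tensor) -> torch.Tensor:
    """Pairwise BPR-style ranking loss for a performance predictor.

    pred_scores: (N,) predictor outputs for N architectures.
    true_perf:   (N,) ground-truth accuracies (e.g. NAS-Bench validation accuracy).
    Encourages pred_scores to rank architectures in the same order as true_perf.
    """
    diff = pred_scores.unsqueeze(1) - pred_scores.unsqueeze(0)           # s_i - s_j for all pairs
    better = (true_perf.unsqueeze(1) > true_perf.unsqueeze(0)).float()   # 1 where arch i truly beats arch j
    # -log sigmoid(s_i - s_j), averaged over pairs where i should rank above j
    return -(F.logsigmoid(diff) * better).sum() / better.sum().clamp(min=1.0)
```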