Optimal Transport Kernels for Sequential and Parallel Neural Architecture Search
Authors: Vu Nguyen, Tam Le, Makoto Yamada, Michael A. Osborne
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate that our TW-based approaches outperform other baselines in both sequential and parallel NAS. |
| Researcher Affiliation | Collaboration | ¹Amazon Adelaide (work done prior to joining Amazon), ²RIKEN AIP, ³Kyoto University, ⁴University of Oxford. |
| Pseudocode | Yes | Algorithm 1 Sequential and Parallel NAS using Gaussian process with tree-Wasserstein kernel |
| Open Source Code | Yes | We release the Python code for our experiments at https://github.com/ntienvu/TW_NAS. |
| Open Datasets | Yes | We utilize the popular NAS tabular datasets of Nasbench101 (NB101) (Ying et al., 2019) and Nasbench201 (NB201) (Dong & Yang, 2020) for evaluations. |
| Dataset Splits | No | The paper mentions allocating queries for NAS benchmarks but does not specify the train/validation/test splits of the underlying classification datasets (e.g., CIFAR-10) that these benchmarks utilize. |
| Hardware Specification | No | The paper acknowledges 'NVIDIA for sponsoring GPU hardware and Google Cloud Platform for sponsoring computing resources'. However, it does not specify exact GPU models, CPU models, or other detailed hardware specifications. |
| Software Dependencies | No | The paper mentions using the 'POT library (Flamary & Courty, 2017)' and the 'Auto ML library for TPE and BOHB'. However, specific version numbers for these software dependencies are not provided. |
| Experiment Setup | Yes | All experimental results are averaged over 30 independent runs with different random seeds. We set the number of candidate architectures \|P_t\| = 100. We allocate a maximum budget of 500 queries for NB101 and 200 queries for NB201, including 10% of random selection at the beginning of BO. |
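
For orientation, the sketch below mirrors the sequential setup summarized in the Experiment Setup row: a candidate pool of 100 architectures per step, a fixed query budget, and 10% random selection before GP-based selection begins. It is not the authors' released implementation. `tw_distance`, `sample_candidates`, and `query_accuracy` are hypothetical placeholders for the paper's tree-Wasserstein distance, the NAS-Bench architecture sampler, and the benchmark's tabular accuracy lookup, and the kernel length-scale and UCB coefficient are arbitrary choices.

```python
import numpy as np

# Hypothetical stand-ins: the paper computes a tree-Wasserstein distance between
# architecture representations and queries NAS-Bench tables; these toy versions
# only exist so the sketch runs end to end.
def tw_distance(a, b):
    # Placeholder for the tree-Wasserstein distance d_TW(a, b).
    return np.abs(a - b).sum()

def sample_candidates(n, dim=8, rng=None):
    # Placeholder for drawing the |P_t| = 100 candidate architectures.
    return rng.random((n, dim))

def query_accuracy(a):
    # Placeholder for a NAS-Bench tabular lookup of validation accuracy.
    return -np.square(a - 0.5).sum()

def gram(X, Y, sigma=1.0):
    # Laplacian-style kernel k(x, x') = exp(-d_TW(x, x') / sigma), a standard
    # way to turn a (tree-)Wasserstein distance into a p.d. kernel.
    return np.exp(-np.array([[tw_distance(x, y) for y in Y] for x in X]) / sigma)

def run_bo(budget=200, pool_size=100, init_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X, y = [], []
    for t in range(budget):
        pool = sample_candidates(pool_size, rng=rng)
        if t < int(init_frac * budget):
            # 10% of the budget is spent on random selection, as in the paper.
            x_next = pool[rng.integers(pool_size)]
        else:
            # GP posterior over the pool, then a GP-UCB pick.
            K = gram(X, X) + 1e-6 * np.eye(len(X))
            Kinv = np.linalg.inv(K)
            ks = gram(pool, X)
            mu = ks @ Kinv @ np.array(y)
            var = np.clip(1.0 - np.einsum("ij,jk,ik->i", ks, Kinv, ks),
                          1e-12, None)
            x_next = pool[np.argmax(mu + 2.0 * np.sqrt(var))]
        X.append(x_next)
        y.append(query_accuracy(x_next))
    return max(y)

# One of the 30 independent seeds in the reported setup.
print(run_bo(budget=200, pool_size=100, seed=0))
```

The selection step above is plain sequential GP-UCB; the paper additionally covers a parallel (batch) variant, which this sketch does not attempt to reproduce.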