SUMNAS: Supernet with Unbiased Meta-Features for Neural Architecture Search
Authors: Hyeonmin Ha, Ji-Hoon Kim, Semin Park, Byung-Gon Chun
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate SUMNAS with qualitative and quantitative experiments on the CIFAR-10 and ImageNet datasets. |
| Researcher Affiliation | Collaboration | Hyeonmin Ha (Seoul National University); Ji-Hoon Kim (NAVER AI Lab and NAVER CLOVA, NAVER Corporation); Semin Park (Yonsei University); Byung-Gon Chun (Seoul National University; FriendliAI) |
| Pseudocode | Yes | Algorithm 1 "Meta-feature training" (see the illustrative sketch after this table) |
| Open Source Code | No | No, the paper does not provide a direct link or an explicit statement about the public availability of its source code. |
| Open Datasets | Yes | We evaluate SUMNAS on two search spaces: NAS-Bench-201 (Dong & Yang, 2020) on CIFAR-10 (Krizhevsky et al., 2009) and MobileNet blocks on ImageNet (Russakovsky et al., 2015). |
| Dataset Splits | Yes | For the experiments of the NAS-Bench-201 search space (Dong & Yang, 2020) and CIFAR-10 (Krizhevsky et al., 2009), we train the supernets on the entire training set of CIFAR-10. On the test set, the hyperparameters are tuned and the reported Kendall tau and accuracies are measured. We also search for the best architecture on the test set. (...) For the experiment of the MobileNet-based search space and ImageNet (Russakovsky et al., 2015), we tune hyperparameters and search for the best architecture using a validation set that includes about 50K examples sampled from the training set. (See the Kendall tau example after this table.) |
| Hardware Specification | No | No, the paper does not specify the exact GPU/CPU models, memory, or other specific hardware used for running the experiments. |
| Software Dependencies | No | No, the paper does not provide specific software names with version numbers for reproducibility (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | We describe the hyperparameters we used in Appendix D. |
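
SUMNAS trains the supernet with a Reptile-style meta-learning procedure (Algorithm 1, "Meta-feature training"): a sampled subnetwork is adapted for a few inner SGD steps, and the shared weights are then moved toward the adapted weights. The sketch below illustrates one such meta-update in PyTorch; the `Supernet` module, its `forward(x, arch)` signature, the `sample_subnet` helper, and all hyperparameter values are illustrative assumptions, not the authors' released code.

```python
import copy
import torch

def meta_train_step(supernet, loader, sample_subnet,
                    inner_steps=4, inner_lr=0.01, meta_lr=1.0):
    """One Reptile-style meta-update: adapt a sampled subnetwork with a few
    SGD steps, then move the shared supernet weights toward the adapted
    weights. All names and values here are illustrative assumptions."""
    arch = sample_subnet()                  # randomly pick one subnetwork
    fast = copy.deepcopy(supernet)          # fast weights for the inner loop
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)

    data_iter = iter(loader)                # loader assumed to yield enough batches
    for _ in range(inner_steps):            # inner-loop adaptation on this subnet
        x, y = next(data_iter)
        loss = torch.nn.functional.cross_entropy(fast(x, arch), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():                   # Reptile outer update:
        for w, w_fast in zip(supernet.parameters(), fast.parameters()):
            w += meta_lr * (w_fast - w)     # interpolate toward adapted weights
```

A design note: the Reptile-style outer update avoids second-order gradients entirely; it is just an interpolation between the shared weights and the fast weights, which keeps the meta-training cost close to ordinary supernet training.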
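
The ranking quality mentioned in the Dataset Splits row is measured with the Kendall tau rank correlation between supernet-predicted and ground-truth architecture rankings. A minimal, self-contained example of computing it with SciPy, using made-up scores (the actual evaluation uses NAS-Bench-201 accuracies):

```python
from scipy.stats import kendalltau

# Hypothetical accuracy estimates from the trained supernet and the
# corresponding ground-truth accuracies (e.g., from NAS-Bench-201)
# for the same five candidate architectures. Values are made up.
predicted    = [0.71, 0.68, 0.74, 0.60, 0.66]
ground_truth = [0.73, 0.69, 0.75, 0.58, 0.70]

tau, p_value = kendalltau(predicted, ground_truth)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3g})")  # tau near 1 means similar rankings
```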