IS-DARTS: Stabilizing DARTS through Precise Measurement on Candidate Importance

Authors: Hongyi He, Longjun Liu, Haonan Zhang, Nanning Zheng

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on NAS-Bench-201 and the DARTS-based search space demonstrate the effectiveness of IS-DARTS.
Researcher Affiliation | Academia | Hongyi He, Longjun Liu*, Haonan Zhang, Nanning Zheng (National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University); hongyihe@stu.xjtu.edu.cn, liulongjun@xjtu.edu.cn, haonanzhang@stu.xjtu.edu.cn, nnzheng@xjtu.edu.cn
Pseudocode | Yes | Algorithm 1: Pipeline of IS-DARTS
Open Source Code | Yes | The implementation of IS-DARTS is available at https://github.com/HY-HE/IS-DARTS.
Open Datasets | Yes | On NAS-Bench-201, we achieve the same state-of-the-art results in 4 independent runs with only 2.0 GPU hours. [...] Results on NAS-Bench-201 Search Space: As the most widely used NAS benchmark, NAS-Bench-201 (Dong and Yang 2020) provides the performance of 15,625 architectures on three datasets (CIFAR-10, CIFAR-100 and ImageNet). [...] Results on DARTS Search Space: The classical DARTS search space (Liu, Simonyan, and Yang 2018) is another important benchmark for evaluating NAS methods. The architecture is searched on the CIFAR-10 dataset.
Dataset Splits | Yes | Third, training of both architecture parameters and network weights requires samples, which forces DARTS to divide the training set into two halves. [...] 200 samples of the validation set are used to calculate a precise and stable IIM, which will be proved in Section 4.3. [...] 600 validation samples are used to calculate the IIM.
Hardware Specification | Yes | All our experiments are implemented with PyTorch (Paszke et al. 2017) and conducted on NVIDIA RTX 3090 GPUs.
Software Dependencies | No | The paper states "All our experiments are implemented with PyTorch (Paszke et al. 2017)" but does not provide a specific version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | Specific architectures and hyper-parameters are shown in Appendix A.1 (He et al. 2023). [...] We keep the search and evaluation settings the same as those of DARTS in (Dong and Yang 2020), except that the number of search epochs is decreased to 30 in order to reduce consumption. The shrink rate r in IS-DARTS is 0.25, and the epoch interval between steps is 2.
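
The reported hyper-parameters (shrink rate r = 0.25, a pruning step every 2 epochs, 30 search epochs) imply a concrete shrinking schedule. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes r denotes the fraction of remaining candidate operations removed at each step (the exact semantics in IS-DARTS may differ) and that pruning stops once a single candidate remains.

```python
def shrink_schedule(num_candidates, r=0.25, interval=2, epochs=30):
    """Illustrative schedule of pruning steps under the assumed semantics:
    every `interval` epochs, remove max(1, floor(remaining * r)) candidates.
    Returns a list of (epoch, candidates_remaining_after_step) tuples."""
    remaining = num_candidates
    steps = []
    for epoch in range(interval, epochs + 1, interval):
        if remaining <= 1:
            break  # a single surviving candidate ends the search
        pruned = max(1, int(remaining * r))
        remaining -= pruned
        steps.append((epoch, remaining))
    return steps

# Example with 8 candidate operations, as in the DARTS search space:
# 8 -> 6 -> 5 -> 4 -> 3 -> 2 -> 1 across pruning steps at epochs 2..12.
print(shrink_schedule(8))
```

Under these assumed semantics, 8 candidates per edge collapse to one by epoch 12, comfortably within the 30-epoch search budget the table reports.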