Bridge the Gap Between Architecture Spaces via A Cross-Domain Predictor
Authors: Yuqiao Liu, Yehui Tang, Zeqiong Lv, Yunhe Wang, Yanan Sun
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we will empirically investigate the effectiveness of the proposed method, including the experiments conducted on ImageNet [12] and CIFAR-10 [30], and the ablation studies. |
| Researcher Affiliation | Collaboration | Yuqiao Liu 1, Yehui Tang 2,3, Zeqiong Lv 1, Yunhe Wang 3, Yanan Sun 1; 1 College of Computer Science, Sichuan University; 2 School of Artificial Intelligence, Peking University; 3 Huawei Noah's Ark Lab |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code will be available: https://github.com/lyq998/CDP (PyTorch); https://gitee.com/mindspore/models/tree/master/research/cv/CDP (MindSpore) |
| Open Datasets | Yes | Our experiments are conducted with PyTorch [41] and MindSpore [25]. Architecture spaces for CDP: NAS-Bench-101 and NAS-Bench-201 are chosen as the source spaces and DARTS is the target space. (...) ImageNet [12] and CIFAR-10 [30] |
| Dataset Splits | No | The paper mentions 'training labels' and a 'validation dataset' in theoretical discussions (Section 3.4), but does not provide specific dataset split information (percentages, sample counts, or defined standard splits) for the experiments conducted in the paper. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper states 'Our experiments are conducted with PyTorch [41] and MindSpore [25],' but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | Search strategy: Following the convention in neural predictor [55], a large set of architectures in the target search space is randomly sampled and their performance is predicted. In addition, according to the previous work on the DARTS search space [5], we limit the max number of skip connections to 2. (...) Moreover, we train these shallow architectures for 50 epochs to further save cost. |
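The search strategy quoted above (sample many candidate architectures, score them with the trained predictor, discard candidates exceeding the skip-connection limit) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `sample_arch` and `predictor` are hypothetical placeholders for the target-space sampler and the cross-domain predictor, and architectures are modeled as simple lists of operation names.

```python
import random

def predictor_based_search(sample_arch, predictor, num_samples=1000,
                           max_skips=2, top_k=5, seed=0):
    """Sketch of predictor-based random search.

    sample_arch(rng) -> architecture (here, a list of op-name strings);
    predictor(arch)  -> predicted performance (higher is better).
    Candidates with more than `max_skips` skip connections are filtered
    out, mirroring the skip-connection limit described for DARTS.
    """
    rng = random.Random(seed)
    candidates = [sample_arch(rng) for _ in range(num_samples)]
    # Enforce the skip-connection constraint before ranking.
    valid = [arch for arch in candidates if arch.count("skip") <= max_skips]
    # Rank the surviving candidates by predicted performance.
    ranked = sorted(valid, key=predictor, reverse=True)
    return ranked[:top_k]

# Toy usage: 8-op architectures, scored by a dummy predictor that
# simply counts convolutions (stands in for the real predictor).
def toy_space(rng):
    return [rng.choice(["conv", "skip", "pool"]) for _ in range(8)]

best = predictor_based_search(toy_space, lambda a: a.count("conv"),
                              num_samples=200, top_k=3)
```

In the paper's setting, the top-ranked architectures would then be trained (as shallow variants, for 50 epochs) to confirm the predictor's ranking; here the toy predictor just makes the control flow concrete.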