Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks

Authors: Xin Chen, Lingxi Xie, Jun Wu, Longhui Wei, Yuhui Xu, Qi Tian (pp. 7064-7072)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform experiments on the search space defined by FairNAS (Chu et al. 2019b), which has 19 cells and 6 choices for each cell. ... All experiments are conducted on ILSVRC2012 (Russakovsky et al. 2015)."
Researcher Affiliation | Collaboration | Xin Chen¹, Lingxi Xie¹, Jun Wu², Longhui Wei¹, Yuhui Xu³, Qi Tian¹ (¹Huawei Cloud & AI, ²Fudan University, ³Shanghai Jiao Tong University)
Pseudocode | Yes | Algorithm 1: Applying GCN for Weight-sharing NAS
Open Source Code | No | The paper does not provide a link to, or an explicit statement about the release of, its source code.
Open Datasets | Yes | "All experiments are conducted on ILSVRC2012 (Russakovsky et al. 2015)."
Dataset Splits | Yes | "On the ImageNet dataset with 1,000 classes, we randomly sample 100 classes and use the corresponding subset to optimize the super-network. 90% of the training data is used to update the model parameters, and the remaining 10% is used for evaluating each of the sampled sub-networks. ... Evaluating each sub-network on the validation subset (around 13K images) takes an average of 4.48 seconds on an NVIDIA Tesla-V100 GPU."
Hardware Specification | Yes | "Evaluating each sub-network on the validation subset (around 13K images) takes an average of 4.48 seconds on an NVIDIA Tesla-V100 GPU."
Software Dependencies | No | The paper mentions using a "graph convolutional network (GCN) (Kipf and Welling 2017)" and describes the training procedure, but it does not specify software dependencies with version numbers (e.g., Python version or deep-learning framework versions).
Experiment Setup | Yes | "150 epochs is often sufficient for super-network training, which takes around 13 hours on eight GPUs. ... We have tested two different settings of M = 2,000 and M = 5,000."
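The search space quoted above (19 cells, 6 candidate operations per cell) and the M = 2,000 / M = 5,000 settings can be sketched as follows. This is an illustrative sampler, not the authors' code; the encoding of a sub-network as one operation index per cell is an assumption.

```python
import random

NUM_CELLS = 19    # sequential cells in the FairNAS-style search space
NUM_CHOICES = 6   # candidate operations per cell

# The space contains 6^19 (about 6.1e14) sub-networks, so exhaustive
# evaluation is infeasible; the paper instead evaluates M sampled ones.
TOTAL = NUM_CHOICES ** NUM_CELLS

def sample_subnetwork(rng):
    """Encode a sub-network as one operation index per cell."""
    return tuple(rng.randrange(NUM_CHOICES) for _ in range(NUM_CELLS))

rng = random.Random(0)
M = 2000  # the paper also tests M = 5000
archs = [sample_subnetwork(rng) for _ in range(M)]
```

Each sampled tuple can then be evaluated on the shared-weight super-network to obtain a training label for the accuracy predictor.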
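The dataset-split procedure quoted in the table (randomly sample 100 of ImageNet's 1,000 classes, then split that subset 90%/10% between super-network training and sub-network evaluation) might look like the sketch below. The helper name and the synthetic image index are hypothetical; real ImageNet has roughly 1,300 training images per class, which is why 10% of a 100-class subset is the ~13K-image validation set the paper mentions.

```python
import random

def make_splits(class_ids, image_index, num_classes=100, train_frac=0.9, seed=0):
    """Randomly pick a class subset, then split its images 90%/10%."""
    rng = random.Random(seed)
    subset = rng.sample(class_ids, num_classes)
    images = [path for c in subset for path in image_index[c]]
    rng.shuffle(images)
    cut = int(train_frac * len(images))
    return images[:cut], images[cut:]  # (train, val)

# Synthetic stand-in for ImageNet: 1,000 classes with 1,300 images each,
# so 100 sampled classes give 130K images and a 13K validation subset.
index = {c: [f"class{c:04d}/img{i:04d}.jpg" for i in range(1300)]
         for c in range(1000)}
train, val = make_splits(list(range(1000)), index)
```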
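Algorithm 1 in the paper applies a GCN (Kipf and Welling 2017) to the weight-sharing NAS problem. As a rough illustration only (not the authors' implementation), a single Kipf-Welling-style graph-convolution layer over a chain of 19 cells, with one-hot operation choices as node features, can be written as:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy example: a chain of 19 cells; each node's feature is a one-hot
# vector of its chosen operation (6 choices), as sampled during search.
rng = np.random.default_rng(0)
n, ops, hidden = 19, 6, 16
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # chain adjacency
X = np.eye(ops)[rng.integers(0, ops, size=n)]                  # one-hot features
W = rng.standard_normal((ops, hidden)) * 0.1
H = gcn_layer(A, X, W)  # node embeddings; pooling + a head would predict accuracy
```

In a predictor of this kind, the node embeddings would typically be pooled and fed to a small regression head trained on (architecture, super-network accuracy) pairs.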