LISSNAS: Locality-based Iterative Search Space Shrinkage for Neural Architecture Search

Authors: Bhavna Gopal, Arjun Sridhar, Tunhou Zhang, Yiran Chen

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We showcase our method on an array of search spaces spanning various sizes and datasets. We accentuate the effectiveness of our shrunk spaces when used in one-shot search by achieving the best Top-1 accuracy in two different search spaces. Our method achieves a SOTA Top-1 accuracy of 77.6% on ImageNet under mobile constraints, best-in-class Kendall-Tau, architectural diversity, and search space size.
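For reference, Kendall-Tau here measures rank agreement between estimated and ground-truth architecture accuracies, which is how the paper scores its shrunk spaces. A minimal sketch of how such a score is typically computed with scipy; the accuracy arrays below are illustrative placeholders, not values from the paper:

```python
# Hedged sketch: computing a Kendall-Tau ranking score for a set of
# architectures. The accuracies are made-up placeholders for illustration.
from scipy.stats import kendalltau

# Ground-truth test accuracies of sampled architectures (e.g., from NAS-Bench-101)
true_acc = [0.921, 0.905, 0.937, 0.889, 0.914]
# Accuracies estimated for the same architectures by a one-shot supernet or predictor
pred_acc = [0.918, 0.901, 0.940, 0.885, 0.910]

tau, p_value = kendalltau(true_acc, pred_acc)
print(f"Kendall-Tau: {tau:.3f} (p={p_value:.3g})")
```

A tau near 1.0 means the estimated ranking closely tracks the true ranking, which is the property the paper claims for its shrunk spaces.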
Researcher Affiliation | Academia | Bhavna Gopal, Arjun Sridhar, Tunhou Zhang, and Yiran Chen, Duke University; {bhavna.gopal, arjun.sridhar, tunhou.zhang, yiran.chen}@duke.edu
Pseudocode | No | The paper describes the algorithm steps and shows a pipeline diagram (Figure 3) but does not include structured pseudocode or an algorithm block.
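Since the paper gives the procedure only in prose and a pipeline figure, the following is a hypothetical, runnable sketch of what a locality-based iterative shrinkage loop could look like. The toy operation set, scoring function, and shrinkage criterion are all assumptions for illustration, not the authors' implementation:

```python
# Hypothetical reconstruction of an iterative search-space shrinkage loop.
# Everything here (toy ops, toy_score, keep criterion) is an assumption.
import random

OPS = ["conv3x3", "conv1x1", "maxpool", "skip", "none"]

def toy_score(arch):
    # Placeholder accuracy proxy; the paper uses an XGBoost predictor instead.
    return sum(op == "conv3x3" for op in arch) + random.random() * 0.5

def shrink_search_space(ops, n_rounds=3, n_samples=200, arch_len=6, keep_ratio=0.5):
    for _ in range(n_rounds):
        # Sample architectures from the current (possibly already shrunk) space.
        candidates = [[random.choice(ops) for _ in range(arch_len)]
                      for _ in range(n_samples)]
        ranked = sorted(candidates, key=toy_score, reverse=True)
        top = ranked[: int(n_samples * keep_ratio)]
        # Locality heuristic: retain only operations that still occur among
        # top-scoring candidates, shrinking the space around good regions.
        ops = [op for op in ops if any(op in arch for arch in top)]
    return ops

print(shrink_search_space(OPS))
```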
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is available, nor does it provide a link to a code repository.
Open Datasets | Yes | We investigate locality and our algorithm's performance on NASBench101, TransNASBench Macro, NASBench301, and ShuffleNetV2. For a more detailed description of these search spaces and respective datasets, please reference Appendix C. ... ImageNet under mobile constraints serves as a canonical benchmark for all NAS techniques.
Dataset Splits | No | The paper mentions training and evaluation sets but does not provide split percentages or the methodology used to construct validation datasets.
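To illustrate the kind of specification this row flags as missing, here is a minimal PyTorch sketch of a seeded, explicitly sized validation split; the dataset choice, 90/10 ratio, and seed are assumptions, not details from the paper:

```python
# Illustration only: an explicit, seeded split of the kind the paper omits.
# The dataset, 10% holdout, and seed 42 are assumptions for this sketch.
import torch
from torch.utils.data import random_split
from torchvision import transforms
from torchvision.datasets import CIFAR10

full_train = CIFAR10(root="./data", train=True, download=True,
                     transform=transforms.ToTensor())
n_val = len(full_train) // 10  # hypothetical 10% validation holdout
train_set, val_set = random_split(
    full_train, [len(full_train) - n_val, n_val],
    generator=torch.Generator().manual_seed(42),  # fixed seed for reproducibility
)
```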
Hardware Specification | No | The paper mentions '4 GPU days' but does not specify the type or model of GPU, CPU, or any other hardware used for the experiments.
Software Dependencies | No | The paper mentions using 'XGBoost as our accuracy predictor' but does not provide version numbers for XGBoost or any other software dependency.
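The paper names XGBoost as its accuracy predictor without further detail, so the following is only a minimal sketch of fitting an XGBoost regressor on architecture encodings; the placeholder encodings, synthetic accuracies, and hyperparameters are assumptions, not the paper's configuration:

```python
# Hedged sketch of an XGBoost accuracy predictor of the kind the paper
# mentions. Features, targets, and hyperparameters here are assumptions;
# the paper does not specify them (or the XGBoost version) in the main text.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 20)).astype(np.float32)  # placeholder op encodings
y = rng.uniform(0.85, 0.95, size=200)                      # placeholder accuracies

predictor = XGBRegressor(n_estimators=200, max_depth=6, learning_rate=0.1)
predictor.fit(X, y)
est_acc = predictor.predict(X[:5])  # predicted accuracies for five candidates
```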
Experiment Setup | No | The paper mentions a 'hyperparameter optimization scheme' but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or other detailed training configurations in the main text.