Towards modular and programmable architecture search
Authors: Renato Negrinho, Matthew Gormley, Geoffrey J. Gordon, Darshan Patil, Nghia Le, Daniel Ferreira
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 7 Experiments. We showcase the modularity and programmability of our language by running experiments that rely on the decoupling of search spaces and search algorithms. Table 1: Test results for search space experiments. Table 2: Test results for search algorithm experiments. |
| Researcher Affiliation | Collaboration | Carnegie Mellon University, TU Wien, Microsoft Research Montreal |
| Pseudocode | Yes | Algorithm 1: Transition. Algorithm 2: Ordered Hyperps. Algorithm 3: Random search. (A hedged sketch of the random-search loop appears below the table.) |
| Open Source Code | Yes | We release an implementation of our language with this paper. Visit https://github.com/negrinho/deep_architect for code and documentation. |
| Open Datasets | Yes | We refer to the search spaces we consider as Nasbench [27], Nasnet [28], Flat [15], and Genetic [26]. |
| Dataset Splits | Yes | The test results for the fully trained architecture with the best validation accuracy are reported in Table 1. |
| Hardware Specification | Yes | We thank Google for generous TPU and GCP grants. |
| Software Dependencies | No | The paper mentions software like TensorFlow [24], PyTorch [25], and Scikit-Learn [22] but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | For the search phase, we randomly sample 128 architectures from each search space and train them for 25 epochs using Adam with a learning rate of 0.001. (An illustrative training-loop sketch follows the table.) |
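The Pseudocode row cites the paper's random search (Algorithm 3). Below is a minimal Python sketch of that loop over a toy search space; every name here (`sample_search_space`, `evaluate`, the specific hyperparameter choices) is a hypothetical stand-in, not DeepArchitect's actual API. It also illustrates the decoupling quoted in the Research Type row: the searcher touches the space only through sampling and evaluation, so search spaces and search algorithms can be swapped independently.

```python
import random

def sample_search_space():
    """Assign every open hyperparameter of a toy search space at random.

    Stand-in for drawing one fully specified architecture from a search space.
    """
    return {
        "num_layers": random.choice([2, 4, 8]),
        "hidden_units": random.choice([64, 128, 256]),
        "activation": random.choice(["relu", "tanh"]),
    }

def evaluate(spec):
    """Stub standing in for training `spec` and returning validation accuracy."""
    return random.random()

def random_search(num_samples=128):
    # The searcher only sees sample/evaluate, so any search space exposing
    # the same interface can be plugged in without changing this loop.
    best_spec, best_acc = None, float("-inf")
    for _ in range(num_samples):
        spec = sample_search_space()
        acc = evaluate(spec)
        if acc > best_acc:
            best_spec, best_acc = spec, acc
    return best_spec, best_acc

print(random_search())
```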
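The Experiment Setup row reports 128 sampled architectures, each trained for 25 epochs with Adam at a learning rate of 0.001. The PyTorch sketch below shows what one such training run could look like under those settings; `model` and `loader` are hypothetical placeholders, and this is not the paper's actual training code.

```python
import torch
import torch.nn as nn

# Constants mirror the reported search-phase setup: 128 sampled
# architectures, 25 epochs each, Adam with lr = 0.001.
NUM_SAMPLES, EPOCHS, LR = 128, 25, 1e-3

def train_architecture(model: nn.Module, loader) -> nn.Module:
    """Train one sampled architecture for the full search-phase budget."""
    optimizer = torch.optim.Adam(model.parameters(), lr=LR)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(EPOCHS):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
    return model
```

In the paper's protocol, a loop like this would run once per sampled architecture, and the architecture with the best validation accuracy would then be fully trained and reported in Table 1.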