TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets
Authors: Chengrun Yang, Gabriel Bender, Hanxiao Liu, Pieter-Jan Kindermans, Madeleine Udell, Yifeng Lu, Quoc V. Le, Da Huang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results on several tabular datasets demonstrate the superiority of TabNAS over previous reward-shaping methods: it finds better models that obey the constraints. |
| Researcher Affiliation | Collaboration | Chengrun Yang¹, Gabriel Bender¹, Hanxiao Liu¹, Pieter-Jan Kindermans¹, Madeleine Udell², Yifeng Lu¹, Quoc V. Le¹, Da Huang¹ (¹Google Research, Brain Team; ²Stanford University). {chengrun, gbender, hanxiaol, pikinder}@google.com, udell@stanford.edu, {yifenglu, qvl, dahua}@google.com |
| Pseudocode | Yes | detailed pseudocode is provided as Algorithm 2 in Appendix B. |
| Open Source Code | Yes | Our implementation can be found at https://github.com/google-research/tabnas. |
| Open Datasets | Yes | The datasets are publicly available. We also provide pseudocode and full details of our hyperparameters to reproduce our results in Tables A1 and A2. |
| Dataset Splits | Yes | To avoid overfitting, we split the labelled portion of a dataset into training and validation splits. Weight updates are carried out on the training split; RL updates are performed on the validation split. |
| Hardware Specification | Yes | We ran all experiments using TensorFlow on a Cloud TPU v2 with 8 cores. |
| Software Dependencies | No | The paper mentions 'TensorFlow' and briefly references 'PyTorch' but does not specify version numbers for these or any other software dependencies crucial for reproducibility. |
| Experiment Setup | Yes | More details of the experiment setup and results in other search spaces can be found in Appendices C and D. |
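
The Research Type and Dataset Splits rows together outline the core loop of the paper: an RL controller proposes architectures, infeasible ones are rejected and resampled rather than penalized through a shaped reward, model weights are trained on the training split, and the controller is updated on validation-split rewards. The sketch below is a minimal, hypothetical illustration of such a loop, not the authors' implementation (the paper's full method additionally corrects the policy-gradient update for the rejection step; see the linked repository). All names here (`validation_accuracy`, `costs`, `COST_LIMIT`) are illustrative placeholders.

```python
import numpy as np

# Minimal sketch of REINFORCE-based architecture search with rejection
# sampling under a resource constraint. Placeholder names throughout;
# this is NOT the authors' implementation.

NUM_CHOICES = 4        # candidate architectures (e.g., hidden-layer sizes)
COST_LIMIT = 1.0       # resource budget an architecture must satisfy
LEARNING_RATE = 0.1

rng = np.random.default_rng(0)
logits = np.zeros(NUM_CHOICES)            # controller parameters
costs = np.array([0.5, 0.9, 1.2, 2.0])    # assumed per-architecture costs

def validation_accuracy(arch: int) -> float:
    """Placeholder reward. In the paper's setup, weights are trained on the
    training split and the reward is measured on the validation split."""
    return 0.6 + 0.1 * arch  # toy monotone reward, for illustration only

baseline = 0.0
for step in range(200):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Rejection sampling: discard infeasible architectures and resample,
    # instead of penalizing them through a shaped reward.
    while True:
        arch = int(rng.choice(NUM_CHOICES, p=probs))
        if costs[arch] <= COST_LIMIT:
            break

    reward = validation_accuracy(arch)
    baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline

    # Plain REINFORCE update for a categorical controller:
    # grad log p(arch) = onehot(arch) - probs.
    grad = -probs
    grad[arch] += 1.0
    logits += LEARNING_RATE * (reward - baseline) * grad
```

The rejection loop replaces the soft resource penalty used in reward-shaping baselines: infeasible samples never contribute a gradient, so with a softmax controller the probability mass concentrates on architectures that satisfy the budget.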