An ADMM Based Framework for AutoML Pipeline Configuration
Authors: Sijia Liu, Parikshit Ram, Deepak Vijaykeerthy, Djallel Bouneffouf, Gregory Bramble, Horst Samulowitz, Dakuo Wang, Andrew Conn, Alexander Gray (pp. 4892-4899)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically evaluate the flexibility (in utilizing existing AutoML techniques), effectiveness (against open source AutoML toolkits), and unique capability (of executing AutoML with practically motivated black-box constraints) of our proposed scheme on a collection of binary classification data sets from UCI ML & OpenML repositories. |
| Researcher Affiliation | Industry | Sijia Liu, Parikshit Ram, Deepak Vijaykeerthy, Djallel Bouneffouf, Gregory Bramble, Horst Samulowitz, Dakuo Wang, Andrew Conn, Alexander Gray IBM Research AI |
| Pseudocode | Yes | Algorithm 1 Operator splitting from ADMM to solve problem (5) (...) Algorithm 2 Operator splitting from ADMM to solve problem (14) (with black-box constraints) |
| Open Source Code | No | The paper does not provide an explicit statement or a link indicating that the source code for their proposed ADMM-based framework is openly available. |
| Open Datasets | Yes | We consider 30 binary classification datasets from the UCI ML (Asuncion and Newman 2007) & OpenML repositories (Bischl and others 2017), and Kaggle. |
| Dataset Splits | Yes | We consider (1 − AUROC) (area under the ROC curve) as the black-box objective and evaluate it on an 80-20% train-validation split for all baselines. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using "scikit-learn algorithms" and refers to "Auto-sklearn" and "TPOT" toolkits, but does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | For ADMM, we utilize BO for (θ-min) and CMAB for (z-min), denoted ADMM(BO,Ba). In this setup, ADMM has 2 parameters: (i) the penalty ρ on the augmented term, (ii) the loss upper-bound f̂ in the CMAB algorithm (Appendix 4). We evaluate the sensitivity of ADMM to these parameters in Appendix 9. The results indicate that ADMM is fairly robust to these parameters, and hence we set ρ = 1 and f̂ = 0.7 throughout. We start the ADMM optimization with λ(0) = 0. |
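The evaluation protocol quoted in the table — scoring a candidate pipeline by (1 − AUROC) on an 80-20% train-validation split — can be sketched with scikit-learn, which the paper's pipelines are built on. This is a minimal illustrative sketch, not the authors' code; the choice of logistic regression as the candidate model and the synthetic dataset are assumptions made only for the example.

```python
# Sketch of the black-box objective from the paper's evaluation setup:
# (1 - AUROC) on a held-out 20% validation split, lower is better.
# Classifier and dataset here are illustrative assumptions, not the paper's.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


def black_box_objective(X, y, model):
    """Return 1 - AUROC of `model` on an 80-20% train-validation split."""
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y
    )
    model.fit(X_tr, y_tr)
    val_scores = model.predict_proba(X_val)[:, 1]
    return 1.0 - roc_auc_score(y_val, val_scores)


# Toy binary classification task standing in for a UCI/OpenML dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
loss = black_box_objective(X, y, LogisticRegression(max_iter=1000))
```

An AutoML optimizer (the ADMM scheme here, or any baseline) would minimize this scalar loss over pipeline and hyperparameter choices.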