Hyper-parameter Tuning under a Budget Constraint
Authors: Zhiyun Lu, Liyu Chen, Chao-Kai Chiang, Fei Sha
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment results show that our method outperforms existing algorithms, including the state-of-the-art one, on real-world tuning tasks across a range of different budgets. |
| Researcher Affiliation | Collaboration | Zhiyun Lu (University of Southern California), Liyu Chen (University of Southern California), Chao-Kai Chiang (Appier Inc.), Fei Sha (Google AI) |
| Pseudocode | Yes | Algorithm 1: Budgeted Hyperparameter Tuning (BHPT) |
| Open Source Code | No | The paper provides a supplementary material link, but it does not explicitly state that the source code for their methodology is available through this link or any other means. |
| Open Datasets | Yes | ResNet on CIFAR-10, FCNet on MNIST, VAE on MNIST |
| Dataset Splits | No | The paper mentions evaluating on a 'heldout set periodically' during training but does not provide specific details on how the data was split for training, validation, and testing (e.g., percentages, sample counts, or explicit standard splits for validation). |
| Hardware Specification | No | The paper does not mention any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software names with version numbers, such as programming languages, libraries, or frameworks used for implementation. |
| Experiment Setup | No | While the paper discusses hyper-parameter tuning and mentions parameters like 'learning rate' and 'architecture type', it does not provide concrete values or specific configurations for these hyperparameters or other system-level training settings used in their experiments. |
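The paper's Algorithm 1 (BHPT) is not reproduced here, but the general setting it addresses — allocating a fixed evaluation budget across candidate hyper-parameter configurations — can be illustrated with a generic successive-halving loop. This is a minimal sketch of that family of methods, not the authors' BHPT algorithm; the `eval_fn` signature, the halving schedule, and the budget accounting are all illustrative assumptions.

```python
def successive_halving(configs, total_budget, eval_fn):
    """Illustrative budgeted tuning loop (not the paper's BHPT).

    Spends roughly `total_budget` evaluation steps across `configs`,
    halving the candidate pool each round while giving survivors more
    per-config budget. `eval_fn(config, steps)` is an assumed interface
    that trains `config` for `steps` and returns a validation loss
    (lower is better).
    """
    # Number of halving rounds needed to reduce the pool to one.
    rounds = max(1, len(configs).bit_length() - 1)
    per_round = total_budget // rounds

    pool = list(configs)
    budget_used = 0
    while len(pool) > 1 and budget_used < total_budget:
        # Split this round's budget evenly over the surviving configs.
        per_config = max(1, per_round // len(pool))
        scores = {c: eval_fn(c, per_config) for c in pool}
        budget_used += per_config * len(pool)
        # Keep the better-scoring half (lowest loss first).
        pool = sorted(pool, key=scores.get)[: max(1, len(pool) // 2)]
    return pool[0]
```

For example, with eight integer "configs" and a toy loss of `abs(c - 3)`, `successive_halving(list(range(8)), 32, lambda c, s: abs(c - 3))` narrows the pool over three rounds and returns `3`.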