Efficient Gradient Approximation Method for Constrained Bilevel Optimization
Authors: Siyuan Xu, Minghui Zhu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically verify the efficacy of the proposed algorithm by conducting experiments on hyperparameter optimization and meta-learning. |
| Researcher Affiliation | Academia | Siyuan Xu, Minghui Zhu* — School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, USA. {spx5032, muz16}@psu.edu |
| Pseudocode | Yes | Algorithm 1: Gradient Approximation Method |
| Open Source Code | No | The paper does not provide concrete access to source code, such as a specific repository link, an explicit code release statement, or code in supplementary materials. |
| Open Datasets | Yes | We conduct the experiment on linear SVM and kernelized SVM on the dataset of diabetes in (Dua and Graff 2017). We formulate the problem as a HO of SVM and conduct experiments on a breast cancer dataset (Dua and Graff 2017). In the experiment, we compare our algorithm with the optimization in Meta Opt Net on datasets CIFAR-FS (Bertinetto et al. 2018) and FC100 (Oreshkin, Rodríguez López, and Lacoste 2018), which are widely used for few-shot learning. |
| Dataset Splits | No | The paper does not provide the specific dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning. While it mentions training and testing, it does not specify the splits themselves. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | We provide details of the problem formulation and the implementation setting in Appendix B.1. Appendix B.2 provides details of the problem formulation and the experiment setting. The two algorithms share all training configurations, including the network structure, the learning rate in each epoch, and the batch size. |
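For context on the class of problems the paper evaluates: casting hyperparameter optimization (HO) as a bilevel problem means an inner problem fits model parameters given a hyperparameter, and an outer problem tunes that hyperparameter against validation loss. The sketch below is a generic illustration with a ridge-regression inner problem and a finite-difference hypergradient; it is not the paper's gradient approximation method, and all data and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data standing in for a train/validation split (hypothetical).
X_tr, y_tr = rng.normal(size=(40, 5)), rng.normal(size=40)
X_val, y_val = rng.normal(size=(20, 5)), rng.normal(size=20)

def inner_solve(lam):
    """Inner problem: ridge regression with penalty lam (closed form)."""
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def outer_loss(lam):
    """Outer objective: validation MSE at the inner solution."""
    w = inner_solve(lam)
    return np.mean((X_val @ w - y_val) ** 2)

# Approximate the hypergradient by central finite differences and
# descend on the scalar hyperparameter lam.
lam, lr, eps = 1.0, 0.1, 1e-4
for _ in range(50):
    g = (outer_loss(lam + eps) - outer_loss(lam - eps)) / (2 * eps)
    lam = max(lam - lr * g, 1e-6)  # keep the penalty positive
```

The paper's contribution replaces this kind of naive hypergradient estimate with an efficient approximation that also handles constraints in the inner problem; the sketch only illustrates the bilevel structure itself.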