Optimal Hessian/Jacobian-Free Nonconvex-PL Bilevel Optimization
Authors: Feihu Huang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct some numerical experiments on the bilevel PL game and hyper-representation learning task to demonstrate efficiency of our proposed method. |
| Researcher Affiliation | Academia | 1College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China 2MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China. |
| Pseudocode | Yes | Algorithm 1 Hessian/Jacobian-free Bilevel Optimization (i.e., HJFBiO) Algorithm |
| Open Source Code | No | No explicit statement or link to open-source code for the described methodology is provided in the paper. |
| Open Datasets | No | The paper describes generating synthetic data for its experiments ('samples {p_i}_{i=1}^n, {q_i}_{i=1}^n, {r_i^1}_{i=1}^n and {r_i^2}_{i=1}^n are independently drawn from normal distributions' and 'We randomly generate n = 30d samples of sensing matrices {C_i}_{i=1}^n from standard normal distribution, and then compute the corresponding no-noise labels o_i = ⟨C_i, H⟩'). It does not provide access information for a publicly available or open dataset. |
| Dataset Splits | Yes | We split all samples into two datasets: a train dataset Dt with 40% of the data and a validation dataset Dv with 60% of the data. |
| Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running the experiments are provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers are provided in the paper. |
| Experiment Setup | Yes | For fair comparison, we set a basic learning rate as 0.01 for all algorithms. In our HJFBiO method, we set δϵ = 10^{-5}. |
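The 40%/60% train/validation split quoted above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `train_val_split` and the use of a seeded shuffle are assumptions for the sketch.

```python
import random

def train_val_split(samples, train_frac=0.4, seed=0):
    """Randomly partition samples into a train set (Dt, 40% of the data)
    and a validation set (Dv, 60%), as described in the paper.
    The seeded shuffle is an assumption; the paper does not specify
    how the random split was implemented."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(train_frac * len(samples))
    train = [samples[i] for i in idx[:cut]]
    val = [samples[i] for i in idx[cut:]]
    return train, val

samples = list(range(100))
Dt, Dv = train_val_split(samples)
print(len(Dt), len(Dv))  # 40 60
```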