Efficient Riemannian Meta-Optimization by Implicit Differentiation
Authors: Xiaomeng Fan, Yuwei Wu, Zhi Gao, Yunde Jia, Mehrtash Harandi
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations of three optimization problems on different Riemannian manifolds show that our method achieves state-of-the-art performance in terms of the convergence speed and the quality of optima. Experiments were conducted on three tasks: principal component analysis (PCA) on the Grassmann manifold, face recognition on the Stiefel manifold, and clustering on the SPD manifold... |
| Researcher Affiliation | Collaboration | (1) Beijing Laboratory of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, Beijing, China; (2) Department of Electrical and Computer Systems Eng., Monash University, and Data61, Australia |
| Pseudocode | Yes | Algorithm 1 Parameter Warmup stage |
| Open Source Code | Yes | The code is available at https://github.com/XiaomengFanmcislab/I-RMM. |
| Open Datasets | Yes | We used MNIST dataset to evaluate our method on the PCA task. We utilized the Yale B dataset (Lee, Ho, and Kriegman 2005) to conduct this experiment. We also conducted experiments on the clustering task of SPD representations by utilizing the Kylberg texture dataset (Kylberg 2011). |
| Dataset Splits | Yes | L_V(X^(T)) is the loss function of the updated Riemannian parameter X^(T) on validation data. |
| Hardware Specification | No | The paper mentions 'GPU memory consumption' but does not provide specific details on the hardware used (e.g., GPU model, CPU, RAM). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | Require: Initial optimization state S^(0) = 0, initial parameters ϕ of our optimizer, maximum iteration T of the inner loop, maximum iteration Υ of the outer loop, and hyperparameter B to update the parameter pool. Table 1: Training time (seconds) comparisons on the PCA task (showing specific Inner Loop Steps values). |
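The Experiment Setup row quotes the skeleton of Algorithm 1: an inner loop of at most T steps updating the Riemannian parameter, an outer loop of at most Υ steps updating the optimizer parameters ϕ against a validation loss, and a parameter pool capped at B entries. A minimal toy sketch of that bi-level structure is below; the quadratic objective, the finite-difference meta-gradient (a stand-in for the paper's implicit differentiation), and all names are illustrative assumptions, not the authors' implementation.

```python
def inner_loop(phi, x0, T):
    """Run T gradient steps on the toy loss f(x) = x^2, with the
    step size phi playing the role of the learned optimizer parameter."""
    x = x0
    for _ in range(T):
        grad = 2.0 * x       # gradient of f(x) = x^2
        x = x - phi * grad   # optimizer step parameterized by phi
    return x

def meta_optimize(x0=1.0, phi=0.01, T=5, upsilon=50, B=10, meta_lr=0.01):
    """Outer loop: update phi so that the inner loop's final iterate
    x^(T) has a small validation loss f(x^(T))."""
    pool = []                       # parameter pool, capped at B entries
    for _ in range(upsilon):        # outer loop (max iteration upsilon)
        x_T = inner_loop(phi, x0, T)
        pool.append(x_T)
        pool = pool[-B:]            # keep at most B entries
        # Meta-gradient of the validation loss w.r.t. phi via a central
        # finite difference. The paper instead uses implicit
        # differentiation, which avoids unrolling the inner loop.
        eps = 1e-4
        loss_hi = inner_loop(phi + eps, x0, T) ** 2
        loss_lo = inner_loop(phi - eps, x0, T) ** 2
        phi -= meta_lr * (loss_hi - loss_lo) / (2 * eps)
    return phi, inner_loop(phi, x0, T)

phi, x_final = meta_optimize()
```

For this convex toy, the learned step size grows from its initial value and the final inner-loop iterate lands much closer to the optimum than it would with the untuned step size, mirroring the convergence-speed comparison the review quotes.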