Tight Convergence Rate of Gradient Descent for Eigenvalue Computation

Authors: Qinghua Ding, Kaiwen Zhou, James Cheng

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Besides, we also give empirical justifications of our convergence rates over synthetic and real data. We validate our convergence results on both synthetic and real datasets. The empirical iteration complexity matches our iteration complexity bound and is much better than that proved in [Xu et al., 2018].
Researcher Affiliation | Academia | Qinghua Ding, Kaiwen Zhou, James Cheng, Department of Computer Science and Engineering, The Chinese University of Hong Kong, {qhding, kwzhou, jcheng}@cse.cuhk.edu.hk
Pseudocode | Yes | Riemannian gradient descent: $g(x_t) = (I - x_t x_t^T) A x_t$, $x_{t+1} = R(x_t, \eta g(x_t))$. Oja's rule: $x_{t+1} = x_t + \eta g(x_t)$. Oja's flow: $\dot{x}(t) = g(x(t))$. (A minimal implementation sketch follows the table.)
Open Source Code | No | The paper does not include an unambiguous statement or a direct link to open-source code for the described methodology.
Open Datasets | Yes | Moreover, we also validate our convergence results on the Schenk dataset used by [Xu et al., 2018] for fair comparison. The Schenk dataset provides a 10,728 × 10,728 PSD sparse matrix with 85,000 non-zeros.
Dataset Splits | No | The paper mentions using 'synthetic and real data' and the Schenk dataset but does not provide train/validation/test splits, percentages, or a partitioning method needed to reproduce the experiments.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | No | The paper mentions re-implementing the experiments from [Xu et al., 2018] under 'the same setting', with $n = 1000$ and $A = U\Sigma U^T$, but it does not provide specific hyperparameter values, learning rates, batch sizes, or other detailed training configurations. (A hypothetical synthetic-data construction follows the table.)
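A minimal NumPy sketch of the iteration quoted in the Pseudocode row. Treating the retraction $R(x, v)$ as renormalization $(x + v)/\|x + v\|$ is an assumption, as are the step size, iteration count, and function name; the paper's exact choices are not stated in the text above.

```python
import numpy as np

def riemannian_gd_top_eigenvector(A, eta=0.1, iters=1000, seed=0):
    """Approximate the leading eigenvector of a symmetric PSD matrix A."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)                 # start on the unit sphere
    for _ in range(iters):
        Ax = A @ x
        g = Ax - (x @ Ax) * x              # Riemannian gradient g(x) = (I - x x^T) A x
        x = x + eta * g                    # Oja's rule: x_{t+1} = x_t + eta * g(x_t)
        x /= np.linalg.norm(x)             # assumed retraction R: renormalize to the sphere
    return x
```

Omitting the final normalization inside the loop recovers plain Oja's rule; keeping it corresponds to Riemannian gradient descent with the assumed retraction.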
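The Experiment Setup row only states $n = 1000$ and $A = U\Sigma U^T$. A hypothetical construction consistent with that description is sketched below; the spectrum $\Sigma$, the eigengap, and the function name are assumptions, since the text quoted here does not specify them.

```python
import numpy as np

def make_synthetic_psd(n=1000, gap=0.1, seed=0):
    """Build A = U Sigma U^T, PSD with an assumed top eigengap of `gap`."""
    rng = np.random.default_rng(seed)
    # Random orthogonal U via QR decomposition of a Gaussian matrix.
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    # Assumed spectrum: leading eigenvalue 1, remaining eigenvalues below 1 - gap.
    sigma = np.concatenate(([1.0], (1.0 - gap) * rng.uniform(size=n - 1)))
    return U @ np.diag(sigma) @ U.T
```

Combined with the previous sketch, `riemannian_gd_top_eigenvector(make_synthetic_psd())` reproduces the qualitative setting (leading-eigenvector computation on a synthetic PSD matrix), not the paper's exact configuration.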