Invariant subspaces and PCA in nearly matrix multiplication time

Authors: Aleksandros Sobczyk, Marko Mladenovic, Mathieu Luisier

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | To achieve such provable forward-error guarantees, our methods rely on a new O(n^{ω+η}) stability analysis for the Cholesky factorization, and a smoothed analysis for computing spectral gaps, which can be of independent interest. Ultimately, we obtain new matrix multiplication-type bit complexity upper bounds for PCA problems, including classical PCA and (randomized) low-rank approximation.
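
To make concrete what that stability analysis concerns, the snippet below is a minimal numpy sketch of the Cholesky factorization M = L Lᵀ of a symmetric positive definite matrix, together with a forward-error check. The test matrix, its conditioning, and the printed residual are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((100, 100))
M = B @ B.T + 100.0 * np.eye(100)   # illustrative well-conditioned SPD matrix

L = np.linalg.cholesky(M)           # lower-triangular factor, M = L @ L.T
rel_residual = np.linalg.norm(M - L @ L.T) / np.linalg.norm(M)
print(f"relative residual: {rel_residual:.2e}")  # close to machine epsilon
```
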
Researcher Affiliation | Collaboration | Aleksandros Sobczyk (IBM Research and ETH Zurich, obc@zurich.ibm.com); Marko Mladenovic (ETH Zurich, mmladenovic@ethz.ch); Mathieu Luisier (ETH Zurich, mluisier@iis.ee.ethz.ch)
Pseudocode | Yes | Algorithm 1: PROJECTOR, Algorithm 2: CHOLESKY(M), Algorithm 3: PURIFY, Algorithm 4: REDUCE, Algorithm 5: REGULARIZE, Algorithm 6: DENSITY, Algorithm 8: Block-Krylov Iteration (Alg. 2 of [106])
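
Algorithm 8 is a block-Krylov iteration. As a point of reference, here is a hedged numpy sketch of a standard randomized block Krylov method for approximating a top-k singular subspace; the function name, Krylov depth q, and starting block are illustrative assumptions, not the paper's exact pseudocode (Alg. 2 of [106]).

```python
import numpy as np

def block_krylov_topk(A, k, q=4, seed=0):
    """Approximate the top-k left singular subspace of A using a
    block Krylov space grown from a Gaussian starting block."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    blocks = [A @ rng.standard_normal((d, k))]   # starting block A @ Pi
    for _ in range(q):                           # Krylov depth q (assumed)
        blocks.append(A @ (A.T @ blocks[-1]))    # multiply by A A^T
    K = np.hstack(blocks)                        # [X, (AA^T)X, ..., (AA^T)^q X]
    Q, _ = np.linalg.qr(K)                       # orthonormal Krylov basis
    # Rayleigh-Ritz: best rank-k subspace of A restricted to range(Q).
    U, _, _ = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U[:, :k]                          # approximate top-k basis

rng = np.random.default_rng(1)
A = rng.standard_normal((300, 80))
Z = block_krylov_topk(A, k=5)
err = np.linalg.norm(A - Z @ (Z.T @ A)) / np.linalg.norm(A)
print(f"relative projection error: {err:.3f}")
```

The Rayleigh-Ritz step (an SVD of Qᵀ A) extracts the best rank-k subspace within the Krylov space; a larger depth q trades extra matrix products for faster convergence.
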
Open Source Code | No | The paper does not contain any statement about releasing source code or provide a link to a code repository.
Open Datasets | No | The paper is theoretical and does not conduct experiments with datasets. It refers to applications in machine learning (PCA) and scientific computing (DFT) that typically use data, but it does not specify or provide access information for any particular dataset used for empirical evaluation.
Dataset Splits | No | The paper is purely theoretical and does not conduct experiments, so no training/validation/test splits are reported.
Hardware Specification | No | The paper is theoretical and does not report on experimental runs; thus, no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and does not provide details on specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameter values or training configurations.