John Ellipsoids via Lazy Updates

Authors: David Woodruff, Taisuke Yasuda

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We give a faster algorithm for computing an approximate John ellipsoid around n points in d dimensions. The best known prior algorithms repeatedly compute the leverage scores of the points and reweight the points by these scores [CCLY19]. We show that this algorithm can be substantially sped up by delaying the computation of high-accuracy leverage scores using sampling, and then later computing multiple batches of high-accuracy leverage scores via fast rectangular matrix multiplication. We also give low-space streaming algorithms for John ellipsoids using similar ideas.
Researcher Affiliation | Collaboration | David P. Woodruff, Carnegie Mellon University (dwoodruf@cs.cmu.edu); Taisuke Yasuda, Voleon Group (yasuda.taisuke1@gmail.com)
Pseudocode | Yes | Algorithm 1, Streaming John ellipsoids via lazy updates; Algorithm 2, Approximating the quadratics; Algorithm 3, John ellipsoids via lazy updates
Open Source Code | No | The paper does not contain any statement about releasing source code or a link to a code repository.
Open Datasets | No | The paper is theoretical and does not involve experiments on datasets; therefore, it does not mention any publicly available or open datasets.
Dataset Splits | No | The paper is theoretical and does not involve experiments, so it does not discuss training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe any computational experiments, so no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and focuses on algorithm design and analysis. It does not describe any computational experiments that would require specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design and analysis. It does not describe any empirical experiments, so no experimental setup details such as hyperparameters or training configurations are provided.
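The prior algorithm that the abstract says is being accelerated is the [CCLY19] fixed-point iteration: each point's weight is replaced by the leverage score of its reweighted row, and the resulting weights define the approximating ellipsoid. Below is a minimal NumPy sketch of that baseline iteration only, not the paper's lazy-update variant; the function name `john_weights` and the iteration count are illustrative choices, not from the paper.

```python
import numpy as np

def john_weights(A, T=50):
    """[CCLY19]-style fixed-point iteration for approximate John ellipsoids.

    A is n x d with points as rows. Each round sets
        w_i <- w_i * a_i^T (A^T W A)^{-1} a_i,
    i.e. the leverage score of row i of W^{1/2} A. The paper's contribution
    is to speed up these rounds by computing most leverage scores lazily
    (via sampling, then batched high-accuracy recomputation); that part is
    not sketched here.
    """
    n, d = A.shape
    w = np.full(n, d / n)  # uniform start; weights always sum to d
    for _ in range(T):
        M = A.T @ (w[:, None] * A)          # M = A^T W A
        Minv = np.linalg.inv(M)
        # q_i = a_i^T M^{-1} a_i, so w_i * q_i is the i-th leverage score
        w = w * np.einsum('ij,jk,ik->i', A, Minv, A)
    return w
```

Since leverage scores of a full-rank n x d matrix sum to d, the weight vector keeps total mass d throughout; once every quadratic a_i^T M^{-1} a_i is at most roughly 1, the ellipsoid {x : x^T M^{-1} x <= 1} with M = A^T W A contains all the points.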