Coresets for Multiple $\ell_p$ Regression

Authors: David Woodruff, Taisuke Yasuda

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "Coresets for Multiple $\ell_p$ Regression. David P. Woodruff, Taisuke Yasuda. A coreset of a dataset with $n$ examples and $d$ features is a weighted subset of examples that is sufficient for solving downstream data analytic tasks. Nearly optimal constructions of coresets for least squares and $\ell_p$ linear regression with a single response are known in prior work. However, for multiple $\ell_p$ regression where there can be $m$ responses, there are no known constructions with size sublinear in $m$. In this work, we construct coresets of size $O(\varepsilon^{-2} d)$ for $p < 2$ and $O(\varepsilon^{-p} d^{p/2})$ for $p > 2$ independently of $m$ (i.e., dimension-free) that approximate the multiple $\ell_p$ regression objective at every point in the domain up to a $(1 \pm \varepsilon)$ relative error. If we only need to preserve the minimizer subject to a subspace constraint, we improve these bounds by an $\varepsilon$ factor for all $p > 1$. All of our bounds are nearly tight."
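The coreset guarantee in the abstract, preserving the objective at every point in the domain, can be illustrated for the classical least squares case ($p = 2$, covered by prior work) with standard leverage-score row sampling. The sketch below is a generic textbook construction for intuition only, not the paper's algorithm (which targets general $p$ and dimension-free sizes); the function name, sample size, and test matrices are our own choices.

```python
# Illustrative leverage-score sampling coreset for multiple least squares
# (p = 2). A standard textbook construction, not the paper's algorithm.
import numpy as np

def leverage_score_coreset(A, B, size, seed=0):
    rng = np.random.default_rng(seed)
    # Leverage score of row i: squared norm of row i of an orthonormal
    # basis U for the column space of A.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    scores = np.sum(U**2, axis=1)
    probs = scores / scores.sum()
    idx = rng.choice(len(A), size=size, p=probs)
    # Reweight by inverse sampling probability so the subsampled objective
    # is an unbiased estimator of the full objective.
    w = np.sqrt(1.0 / (size * probs[idx]))
    return w[:, None] * A[idx], w[:, None] * B[idx]

# Usage: solve multiple least squares on the coreset instead of all rows.
A = np.random.default_rng(1).normal(size=(5000, 20))
B = np.random.default_rng(2).normal(size=(5000, 3))
A_c, B_c = leverage_score_coreset(A, B, size=500)
X_c = np.linalg.lstsq(A_c, B_c, rcond=None)[0]
```

With enough sampled rows, $\|SAX - SB\|_F^2$ approximates $\|AX - B\|_F^2$ for every $X$ with high probability, which is the "approximate the objective at every point in the domain" property the abstract describes.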
Researcher Affiliation | Academia | School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks in the main text; algorithms are described in prose and through mathematical derivations.
Open Source Code | No | The paper makes no explicit statement about releasing the source code for the described methodology and provides no repository link.
Open Datasets | Yes | The experiment code loads the public MNIST dataset via Keras:

```python
from keras.datasets import mnist
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

np.random.seed(2024)
# Load MNIST, flatten each 28x28 image to a 784-dimensional row,
# and normalize pixel values to [0, 1].
(train_X, train_y), (test_X, test_y) = mnist.load_data()
train_X = train_X.reshape(len(train_X), -1)
train_X = train_X / np.max(train_X)
n, d = train_X.shape
```
Dataset Splits | No | The paper mentions `train_X` and `test_X` from the MNIST dataset but does not specify any training/test/validation splits or percentages for reproducibility.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments; it only presents the experiment code in Python.
Software Dependencies | Yes | The experiment code in Section F.1 imports the following libraries: `keras.datasets`, `numpy`, `matplotlib.pyplot`, and `tensorflow`. No versions are specified, but these are widely used, and recent versions should suffice for basic reproduction.
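Because no versions are given, anyone reproducing the experiments must pin their own. A minimal `requirements.txt` sketch follows; these version numbers are our assumption, not from the paper (the `keras.datasets` import resolves through TensorFlow's bundled Keras, so no separate pin is listed):

```
# Hypothetical version pins; the paper specifies none.
tensorflow==2.15.0
numpy==1.26.4
matplotlib==3.8.4
```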
Experiment Setup | Yes | Section F.1 fixes the optimizer, learning rate, iteration budget, and sweep of sample sizes (excerpted; the elisions are in the quoted response):

```python
def run(train_ds, max_iter=200, p=1):
    ...
    x = tf.Variable(initial_value=x0)
    opt = tf.keras.optimizers.Adam(learning_rate=0.5)
    ...
    while opt.iterations < max_iter:
        ...

sample_sizes = [100, 500, 1000, 5000, 10000]
for m in [100, 500]:
    ...
```
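To make the excerpt concrete, here is one hedged reconstruction of the elided pieces, assuming the loop minimizes the multiple $\ell_p$ regression objective $\|Ax - B\|_{p,2}^p$ (the sum over examples of the $p$-th power of each row's $\ell_2$ residual norm) with Adam. The objective form, shapes, and zero initialization are our assumptions, not the paper's verbatim code.

```python
# Hedged reconstruction of the elided training loop; the objective, shapes,
# and initialization below are assumptions, not the paper's verbatim code.
import tensorflow as tf

def run(A, B, max_iter=200, p=1):
    d, m = A.shape[1], B.shape[1]
    x = tf.Variable(initial_value=tf.zeros([d, m], dtype=A.dtype))
    opt = tf.keras.optimizers.Adam(learning_rate=0.5)

    def loss_fn():
        # Multiple ell_p regression objective ||A x - B||_{p,2}^p:
        # per-row ell_2 residual norms, raised to the p-th power, summed.
        # The small constant keeps the gradient finite at zero residual.
        resid = tf.matmul(A, x) - B
        row_norms = tf.sqrt(tf.reduce_sum(resid**2, axis=1) + 1e-12)
        return tf.reduce_sum(row_norms**p)

    while opt.iterations < max_iter:
        opt.minimize(loss_fn, var_list=[x])
    return x, float(loss_fn())

# Example call on a float32 slice of the MNIST features loaded above,
# against m = 100 random responses (shapes chosen for illustration).
# A = tf.constant(train_X[:1000], dtype=tf.float32)
# B = tf.random.normal([1000, 100])
# x_opt, final_loss = run(A, B, max_iter=200, p=1)
```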