Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Coresets for Multiple $\ell_p$ Regression

Authors: David Woodruff, Taisuke Yasuda

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Coresets for Multiple ℓ_p Regression. David P. Woodruff, Taisuke Yasuda. A coreset of a dataset with n examples and d features is a weighted subset of examples that is sufficient for solving downstream data analytic tasks. Nearly optimal constructions of coresets for least squares and ℓ_p linear regression with a single response are known in prior work. However, for multiple ℓ_p regression where there can be m responses, there are no known constructions with size sublinear in m. In this work, we construct coresets of size Õ(ε^{-2} d) for p < 2 and Õ(ε^{-p} d^{p/2}) for p > 2, independently of m (i.e., dimension-free), that approximate the multiple ℓ_p regression objective at every point in the domain up to (1 ± ε) relative error. If we only need to preserve the minimizer subject to a subspace constraint, we improve these bounds by an ε factor for all p > 1. All of our bounds are nearly tight.
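To make the quantity the coreset must preserve concrete: the multiple ℓ_p regression objective is the entrywise p-norm ‖AX − B‖_{p,p}^p = Σ_i ‖a_iᵀX − b_i‖_p^p, and a weighted coreset replaces the sum over all n examples with a weighted sum over a small sampled subset. Below is a minimal numpy sketch of evaluating the objective on a weighted subset; it uses uniform sampling with inverse-probability weights purely as a placeholder, which is *not* the paper's construction (the paper's coresets use carefully chosen, non-uniform sampling probabilities to get dimension-free size guarantees):

```python
import numpy as np

def multi_lp_objective(A, B, X, p=1.5, weights=None):
    """Entrywise ell_p objective ||AX - B||_{p,p}^p, optionally row-weighted."""
    R = np.abs(A @ X - B) ** p          # (n, m) entrywise |residual|^p
    row_costs = R.sum(axis=1)           # cost contributed by each example
    if weights is None:
        weights = np.ones(len(A))
    return float(weights @ row_costs)

def uniform_subset(A, B, k, rng):
    """Placeholder coreset: uniform row sample with inverse-probability
    weights n/k, which makes the weighted objective unbiased.  Stand-in
    only -- not the sensitivity-based sampling the paper analyzes."""
    n = len(A)
    idx = rng.choice(n, size=k, replace=False)
    w = np.full(k, n / k)
    return A[idx], B[idx], w

rng = np.random.default_rng(0)
n, d, m = 2000, 5, 3                    # many examples, few features/responses
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, m))
X = rng.standard_normal((d, m))         # an arbitrary query point

full = multi_lp_objective(A, B, X)
As, Bs, w = uniform_subset(A, B, 400, rng)
approx = multi_lp_objective(As, Bs, X, weights=w)
```

Note that the guarantee in the paper is much stronger than this sketch suggests: a true coreset approximates the objective at *every* X simultaneously, not just at one query point.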
Researcher Affiliation | Academia | School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks in the main text. Its algorithms are described in prose and mathematical derivations.
Open Source Code | No | The paper makes no explicit statement about publicly releasing source code for the described methodology, and it provides no link to a repository.
Open Datasets | Yes | The quoted experiment code loads the public MNIST dataset:

    from keras.datasets import mnist
    import numpy as np
    import matplotlib.pyplot as plt
    import tensorflow as tf

    np.random.seed(2024)
    (train_X, train_y), (test_X, test_y) = mnist.load_data()
    train_X = train_X.reshape(len(train_X), -1)
    train_X = train_X / np.max(train_X)
    n, d = train_X.shape
Dataset Splits | No | The paper mentions `train_X` and `test_X` from the MNIST dataset but does not specify any training/test/validation splits or percentages for reproducibility.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. It only presents the experiment code in Python.
Software Dependencies | Yes | The experiment code in section F.1 uses the following libraries: `keras.datasets`, `numpy`, `matplotlib.pyplot`, and `tensorflow`. It does not specify versions, but these are widely used libraries whose versions can be inferred for basic reproduction.
Experiment Setup | Yes | The quoted fragment of the experiment code:

    def run(train_ds, max_iter=200, p=1):
        ...
        x = tf.Variable(initial_value=x0)
        opt = tf.keras.optimizers.Adam(learning_rate=0.5)
        ...
        while opt.iterations < max_iter:
            ...

    sample_sizes = [100, 500, 1000, 5000, 10000]
    for m in [100, 500]:
        ...