Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On the Power of Differentiable Learning versus PAC and SQ Learning
Authors: Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro
NeurIPS 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We study the power of learning via mini-batch stochastic gradient descent (SGD) on the population loss, and batch Gradient Descent (GD) on the empirical loss, of a differentiable model or neural network, and ask what learning problems can be learnt using these paradigms. We show that SGD and GD can always simulate learning with statistical queries (SQ), but their ability to go beyond that depends on the precision ρ of the gradient calculations relative to the minibatch size b (for SGD) and sample size m (for GD). With fine enough precision relative to minibatch size, namely when bρ is small enough, SGD can go beyond SQ learning and simulate any sample-based learning algorithm and thus its learning power is equivalent to that of PAC learning; this extends prior work that achieved this result for b = 1. Similarly, with fine enough precision relative to the sample size m, GD can also simulate any sample-based learning algorithm based on m samples. In particular, with polynomially many bits of precision (i.e. when ρ is exponentially small), SGD and GD can both simulate PAC learning regardless of the mini-batch size. On the other hand, when bρ² is large enough, the power of SGD is equivalent to that of SQ learning. Our main result, given below as a four-part Theorem, establishes the power of bSGD learning relative to PAC (i.e. arbitrary sample-based) and SQ learning. |
| Researcher Affiliation | Collaboration | Emmanuel Abbe (EPFL), Pritish Kamath (Google Research), Eran Malach (HUJI), Colin Sandon (MIT), Nathan Srebro (TTIC) |
| Pseudocode | Yes | Algorithm 1 SAMPLE-EXTRACT bSQ algorithm (lines in green contain high-level idea of algorithm) Input: Batch size b, Tolerance ρ satisfying bρ < 1/2. Output: Sample z ∼ D |
| Open Source Code | No | The paper does not provide any concrete access (link, explicit statement of release) to open-source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments with specific datasets, thus no information about public dataset availability for training is provided. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments with specific dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments, thus no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any experimental setup or software dependencies with specific version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training settings. |
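The paper itself is purely theoretical, but the object it analyzes, minibatch SGD where gradients are only available at a finite precision ρ, is easy to make concrete. The sketch below is a hypothetical illustration (not code from the paper): each averaged minibatch gradient is rounded to the nearest multiple of ρ before the update, modeling the ρ-precision gradient access whose interplay with the batch size b drives the paper's results. All function and variable names here are our own.

```python
import numpy as np

def round_to_precision(g, rho):
    # Round each gradient coordinate to the nearest multiple of rho,
    # modeling rho-precision gradient access.
    return rho * np.round(np.asarray(g) / rho)

def minibatch_sgd_finite_precision(grad_fn, w0, sampler, b, rho, lr, steps):
    """Illustrative b-sample minibatch SGD with rho-precision gradients.

    grad_fn(w, z): per-example gradient of the loss at parameters w.
    sampler():     draws one example z from the data distribution.
    """
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        batch = [sampler() for _ in range(b)]
        g = np.mean([grad_fn(w, z) for z in batch], axis=0)
        w -= lr * round_to_precision(g, rho)  # update uses only rounded gradient
    return w

# Toy usage: squared loss (w - z)^2 / 2 with all examples equal to 1.0,
# so SGD should drive w toward 1.0 until the rounded gradient vanishes.
w_final = minibatch_sgd_finite_precision(
    grad_fn=lambda w, z: w - z,
    w0=[0.0],
    sampler=lambda: 1.0,
    b=4,
    rho=0.01,
    lr=0.1,
    steps=200,
)
```

Note that once |w − 1| drops below ρ/2 the rounded gradient is exactly zero and the iterates freeze, a small concrete instance of how coarse precision limits what the dynamics can resolve.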