Approximation-Generalization Trade-offs under (Approximate) Group Equivariance
Authors: Mircea Petrache, Shubhendu Trivedi
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we conduct a formal unified investigation of these intuitions. To begin, we present general quantitative bounds that demonstrate how models capturing task-specific symmetries lead to improved generalization. (See the illustrative sketch below the table.) |
| Researcher Affiliation | Academia | UC Chile, Fac. de Matemáticas & Inst. de Ingeniería Matemática y Computacional, Av. Vicuña Mackenna 4860, Santiago, 6904441, Chile. mpetrache@mat.uc.cl; shubhendu@csail.mit.edu. |
| Pseudocode | No | The paper contains mathematical derivations and proofs but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper is theoretical and focuses on mathematical derivations; it does not mention releasing any open-source code for the described methodology or provide links to code repositories. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments with specific datasets. While it discusses data distributions, it does not refer to any publicly available or open datasets by name, link, or citation. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with dataset splits. Therefore, no information regarding training, validation, or test dataset splits is provided. |
| Hardware Specification | No | The paper is purely theoretical and describes no computational experiments, so no hardware specification is given. |
| Software Dependencies | No | The paper is theoretical and does not describe computational experiments that would require a list of specific software dependencies with version numbers for reproducibility. |
| Experiment Setup | No | The paper is theoretical and does not involve empirical experiments. Therefore, it does not provide details about experimental setup, hyperparameters, or system-level training settings. |
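
The "Research Type" row quotes the paper's central claim: models that capture a task's symmetries generalize better. As the paper provides no code, the following is a minimal illustrative sketch, not the authors' method. It shows the simplest special case of that idea, exact invariance obtained by group averaging (the Reynolds operator) over the rotation group C4; the predictor `f` and all names are hypothetical.

```python
import numpy as np

def group_average(f, x):
    """Symmetrize a predictor f over the rotation group C4 by averaging
    its outputs over all group-transformed inputs (Reynolds operator).
    The averaged predictor is exactly invariant to 90-degree rotations."""
    return np.mean([f(np.rot90(x, k)) for k in range(4)], axis=0)

# Toy predictor: a fixed random linear functional of a flattened 8x8 image.
rng = np.random.default_rng(0)
w = rng.standard_normal(64)
f = lambda x: w @ x.ravel()

x = rng.standard_normal((8, 8))
# The symmetrized predictor agrees on x and any rotation of x:
print(np.isclose(group_average(f, x), group_average(f, np.rot90(x))))  # True
```

Averaging restricts the effective hypothesis class to symmetric functions, which is roughly the mechanism by which symmetry can improve generalization; the paper's contribution is to quantify this, including the trade-off incurred when the model's symmetry only approximately matches the data's.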