A Generalized Weighted Optimization Method for Computational Learning and Inversion

Authors: Kui Ren, Yunan Yang, Björn Engquist

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We analyze a generalized weighted least-squares optimization method for computational learning and inversion with noisy data. ... Here, we characterize the impact of the weighting scheme on the generalization error of the learning method, where we derive explicit generalization errors for the random Fourier feature model in both the under- and over-parameterized regimes. For more general feature maps, error bounds are provided based on the singular values of the feature matrix." (An illustrative sketch of this setup follows the table.)
Researcher Affiliation | Academia | Björn Engquist, The University of Texas at Austin, Austin, TX 78712, USA (engquist@oden.utexas.edu); Kui Ren, Columbia University, New York, NY 10027, USA (kr2002@columbia.edu); Yunan Yang, ETH Zürich, Zürich, Switzerland (yyn0410@gmail.com)
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks. The methods are described mathematically.
Open Source Code | No | The paper does not include any statement about open-sourcing code or provide a link to a code repository.
Open Datasets | No | The paper states:
Dataset Splits | No | The paper does not specify training, validation, and test dataset splits. It focuses on theoretical analysis of generalization error.
Hardware Specification | No | The paper does not provide any specific hardware details used for the numerical calculations or theoretical analysis.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers used for its analysis or numerical calculations.
Experiment Setup | No | The paper focuses on theoretical analysis and derivations of generalization errors. It does not describe an empirical experimental setup with hyperparameters or training configurations.
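
The Research Type row above quotes the paper's setting of weighted least-squares fitting of a random Fourier feature model. As a point of reference only, below is a minimal Python sketch of that general setup; the diagonal weighting, the synthetic data, and all variable names are illustrative assumptions, not the authors' exact formulation or weighting scheme.

import numpy as np

# Minimal sketch (not the authors' exact formulation): fit a random Fourier
# feature (RFF) model by weighted least squares. The paper studies how the
# choice of weighting affects the generalization error of such fits.

rng = np.random.default_rng(0)

# Hypothetical synthetic 1-D regression data with additive noise.
n_train, n_features = 50, 200          # over-parameterized: n_features > n_train
x = rng.uniform(-1.0, 1.0, size=(n_train, 1))
y = np.sin(2.0 * np.pi * x[:, 0]) + 0.1 * rng.standard_normal(n_train)

# Random Fourier feature map phi(x) = cos(x w^T + b).
w = rng.standard_normal((1, n_features)) * 5.0
b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
A = np.cos(x @ w + b)                  # feature matrix, shape (n_train, n_features)

# Assumed weighting scheme: a positive diagonal weight on the residual.
# Any other positive diagonal W could be substituted here.
weights = np.ones(n_train)
W = np.diag(weights)

# Weighted least-squares solution of min_theta ||W (A theta - y)||_2^2,
# computed via the pseudoinverse, which yields the minimum-norm solution
# in the over-parameterized regime.
theta = np.linalg.pinv(W @ A) @ (W @ y)

print("weighted training residual:", np.linalg.norm(W @ (A @ theta - y)))

In the under-parameterized regime (n_features < n_train) the same pseudoinverse expression returns the ordinary weighted least-squares fit, which is why the two regimes the abstract mentions can be explored with this single sketch.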