H-Consistency Bounds for Surrogate Loss Minimizers

Authors: Pranjal Awasthi, Anqi Mao, Mehryar Mohri, Yutao Zhong

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we report the results of simulations illustrating our bounds and their tightness."
Researcher Affiliation | Collaboration | ¹Google Research, New York, NY; ²Courant Institute of Mathematical Sciences, New York, NY. Correspondence to: Anqi Mao <aqmao@cims.nyu.edu>, Yutao Zhong <yutao@cims.nyu.edu>.
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing open-source code for the methodology described.
Open Datasets | No | The paper mentions generating synthetic data for simulations ("We generate data points x ∈ ℝ on [−1, +1]") but does not refer to a publicly available or open dataset.
Dataset Splits | No | The paper mentions using 10^7 i.i.d. samples for approximation but does not specify any training/validation/test dataset splits (this simulation setup is illustrated in the sketch below the table).
Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments, such as CPU or GPU models.
Software Dependencies | No | The paper does not specify any software dependencies or their version numbers.
Experiment Setup | No | The paper describes generating synthetic data and approximating risks (Section 7), but does not provide specific experimental setup details such as hyperparameters, learning rates, or optimizer settings; it appears to rely on analytical calculations and simulations rather than on training machine-learning models.
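
For context on the simulation referenced in the Open Datasets, Dataset Splits, and Experiment Setup rows, here is a minimal Python sketch of that kind of Monte Carlo risk approximation. Only the interval [−1, +1] and the 10^7 sample size come from the excerpts above; the conditional distribution eta(x), the one-parameter hypothesis class h_w(x) = w·x, and the hinge loss as the surrogate are illustrative assumptions, not the authors' exact setup.

```python
# Hedged sketch of a Monte Carlo simulation in the spirit of Section 7:
# approximate surrogate and target risks over a simple hypothesis class.
# eta(x), the class h_w(x) = w * x, and the hinge surrogate are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10**7                                  # sample size quoted in the paper
x = rng.uniform(-1.0, 1.0, size=n)         # data points x in R on [-1, +1]

eta = 0.5 * (1.0 + x)                      # assumed P(y = +1 | x)
y = np.where(rng.uniform(size=n) < eta, 1.0, -1.0)

def zero_one_risk(w: float) -> float:
    """Monte Carlo estimate of the target (zero-one) risk of h_w."""
    return float(np.mean(np.sign(w * x) != y))

def hinge_risk(w: float) -> float:
    """Monte Carlo estimate of the surrogate (hinge) risk of h_w."""
    return float(np.mean(np.maximum(0.0, 1.0 - y * (w * x))))

# Sweep the one-parameter class to approximate the best-in-class risks,
# and compute the Bayes zero-one risk E[min(eta, 1 - eta)] for comparison.
ws = np.linspace(-2.0, 2.0, 41)
best_zero_one = min(zero_one_risk(w) for w in ws)
best_hinge = min(hinge_risk(w) for w in ws)
bayes_zero_one = float(np.mean(np.minimum(eta, 1.0 - eta)))
print(best_zero_one, best_hinge, bayes_zero_one)
```

Estimates of this form are enough to compute the estimation errors (risk minus best-in-class risk) for a surrogate loss and the target loss and to plot one against the other, which is the kind of tightness illustration the Research Type row quotes; no model training, dataset splits, or special hardware is required.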