$H$-Consistency Guarantees for Regression

Authors: Anqi Mao, Mehryar Mohri, Yutao Zhong

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We report favorable experimental results in Section 6. In this section, we demonstrate empirically the effectiveness of the smooth adversarial regression algorithms introduced in the previous section.
Researcher Affiliation | Collaboration | Courant Institute of Mathematical Sciences, New York, NY; Google Research, New York, NY.
Pseudocode | No | The paper describes methods and theoretical derivations but does not include any pseudocode or explicitly labeled algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing its source code or a link to a code repository for the methodology described.
Open Datasets | Yes | We studied two real-world datasets: the Diabetes dataset (Efron et al., 2004) and the Diverse MAGIC wheat dataset (Scott et al., 2021).
Dataset Splits | No | The paper mentions training and testing but does not explicitly provide details about training/validation/test splits, proportions, or any cross-validation setup.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions the 'CVXPY library (Diamond & Boyd, 2016)' but does not specify a version number for the library itself.
Experiment Setup | Yes | For our smooth adversarial regression losses (2), we chose L = ℓ2, the squared loss, and L = ℓδ with δ = 0.2, the Huber loss, setting τ = 1 as the default. Other choices for the regression loss functions and the value of τ may yield better performance, which can typically be selected by cross-validation in practice. Both our smooth adversarial regression losses and the adversarial squared loss were optimized using the CVXPY library (Diamond & Boyd, 2016).
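For readers attempting to reproduce the quoted setup, the sketch below is one plausible instantiation, offered with the caveat that the paper links no code: a linear model on the openly available Diabetes dataset, trained with CVXPY under the Huber loss ℓδ(r) = r²/2 for |r| ≤ δ and δ(|r| − δ/2) otherwise, with δ = 0.2 as quoted above. The adversarial component of loss (2) with its radius τ = 1 is not modeled here, and the scikit-learn loader and all variable names are our own assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' released code; none is linked):
# a linear model on the Diabetes dataset trained with a Huber-type loss
# (threshold delta = 0.2, as in the quoted setup) via CVXPY. The adversarial
# part of the paper's smooth adversarial regression loss (2) is omitted.
import cvxpy as cp
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True)  # 442 samples, 10 features
n, d = X.shape

w = cp.Variable(d)  # weight vector
b = cp.Variable()   # intercept
residuals = X @ w + b - y

# cp.huber(x, M) evaluates to x**2 for |x| <= M and 2*M*|x| - M**2 otherwise,
# i.e. a rescaling of the textbook Huber loss.
delta = 0.2
objective = cp.Minimize(cp.sum(cp.huber(residuals, delta)) / n)
problem = cp.Problem(objective)
problem.solve()

print("average Huber-type training loss:", problem.value)
```

Swapping cp.huber for cp.square recovers the squared-loss choice L = ℓ2; the paper's adversarial losses additionally involve perturbations of the inputs within a radius τ, which this sketch does not model.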