Regression Learning with Limited Observations of Multivariate Outcomes and Features

Authors: Yifan Sun, Grace Yi

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evidence from the abstract: "Extensive numerical experiments show that our approach outperforms methods that apply existing algorithms for univariate outcome individually to each coordinate of multivariate outcomes in a naive manner. Further, utilizing the L1 loss function or introducing a Lasso-type penalty can enhance predictions in the presence of outliers or high dimensional features. This research contributes valuable insights into addressing the challenges posed by incomplete data." See also Section 5 (Experiments).
Researcher Affiliation | Academia | (1) Department of Statistical and Actuarial Sciences, University of Western Ontario, London, Canada; (2) Department of Computer Science, University of Western Ontario, London, Canada.
Pseudocode | Yes | Algorithm 1: Multivariate Least Squares Ridge Regression; Algorithm 2: Multivariate Least Squares Lasso Regression; Algorithm 3: Multivariate Least Absolute Deviations Ridge Regression; Algorithm 4: Multivariate Least Absolute Deviations Lasso Regression; Algorithm 5: Multivariate AERR. (Generic ridge and least-absolute-deviations sketches appear after this table.)
Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for the methodology, nor a link to a code repository.
Open Datasets | Yes | "We apply the proposed method to the yeast cell dataset, available from R package spls." The dataset can also be accessed at http://jeffgoldsmith.com/IWAFDA/shortcourse_data.html.
Dataset Splits | Yes | "We randomly split the entire sample into training data and test data" using 10-fold cross-validation. (See the cross-validation sketch after this table.)
Hardware Specification | No | The paper does not provide specific details on the hardware used to run the experiments.
Software Dependencies | No | The paper mentions the R packages 'mice' and 'spls' but does not provide version numbers for these or other software dependencies.
Experiment Setup | Yes | In implementing all methods, B is set to 100. By Theorem 3.1, the optimal step size for Algorithm 1 is 2(p0 − 1)/{T p (1 + q/q0)}, so the step size for LSR is set to this value. The two tuning parameters in (7), λ1 and λ2, are set to 0.1 and 0.001, respectively. (See the subgradient sketch after this table.)
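
The "Pseudocode" row above names penalized multivariate regressions. The authors' Algorithms 1–5 handle limited (incomplete) observations and are not reproduced here; purely as a point of reference, a minimal sketch of a fully observed multivariate ridge fit, with hypothetical names X, Y, and lam, is:

```python
import numpy as np

def multivariate_ridge(X, Y, lam):
    """Closed-form ridge fit for a multivariate outcome.

    Minimizes ||Y - X W||_F^2 + lam * ||W||_F^2 over the (p, q) coefficient
    matrix W, for X of shape (n, p) and Y of shape (n, q).
    """
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

# Tiny usage example with synthetic data (shapes chosen arbitrarily).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
W_true = rng.normal(size=(10, 3))
Y = X @ W_true + 0.1 * rng.normal(size=(50, 3))
W_hat = multivariate_ridge(X, Y, lam=1.0)
```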
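The "Dataset Splits" row reports 10-fold cross-validation. A minimal sketch of such a split using scikit-learn's KFold on placeholder arrays (the yeast data would be substituted in practice) is:

```python
import numpy as np
from sklearn.model_selection import KFold

# Placeholder arrays; the paper's yeast features X and multivariate outcome Y
# would be substituted here.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Y = rng.normal(size=(100, 3))

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    X_train, X_test = X[train_idx], X[test_idx]
    Y_train, Y_test = Y[train_idx], Y[test_idx]
    # ... fit the model on (X_train, Y_train) and evaluate on (X_test, Y_test) ...
```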
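The "Experiment Setup" row fixes λ1 = 0.1 and λ2 = 0.001 for the penalized objective in the paper's equation (7), which is not reproduced here. A generic subgradient sketch of a least-absolute-deviations fit with an L1 penalty plus a squared-Frobenius penalty, using those values and a hypothetical fixed step size eta, is:

```python
import numpy as np

def lad_lasso_ridge(X, Y, lam1=0.1, lam2=0.001, eta=0.01, n_iter=100):
    """Subgradient descent for (1/n)*||X W - Y||_1 + lam1*||W||_1 + (lam2/2)*||W||_F^2.

    A generic sketch, not the paper's algorithms; the objective form, step size
    eta, and iteration count n_iter are assumptions made for illustration.
    """
    n, p = X.shape
    q = Y.shape[1]
    W = np.zeros((p, q))
    for _ in range(n_iter):
        G = X.T @ np.sign(X @ W - Y) / n   # subgradient of the entrywise L1 loss
        G += lam1 * np.sign(W) + lam2 * W  # subgradients of the two penalties
        W -= eta * G
    return W
```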