On Hypothesis Transfer Learning of Functional Linear Models

Authors: Haotian Lin, Matthew Reimherr

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The effectiveness of the proposed algorithms is demonstrated via extensive synthetic data as well as real-world data applications."
Researcher Affiliation | Academia | "Department of Statistics, The Pennsylvania State University, University Park, PA, USA. Correspondence to: Haotian Lin <hzl435@psu.edu>."
Pseudocode | Yes | "Algorithm 1 TL-FLR"
Open Source Code | Yes | "The R code and the application datasets are available at https://github.com/haotianlin/HTL-FLM."
Open Datasets | Yes | "We consider the Human Activity Recognition (HAR) dataset (Anguita et al., 2013)."
Dataset Splits | Yes | "We randomly split the target sector into the train (80%) and test (20%) set and report the ratio of the four approaches' prediction errors to OFLR's on the test set."
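The split-and-evaluate protocol quoted above can be sketched as follows. This is a minimal illustration, not the paper's R code: the function names, the random-permutation split, and the use of mean squared error for "prediction error" are all assumptions.

```python
import numpy as np

def train_test_split_80_20(n_samples, rng):
    """Randomly partition sample indices into an 80% train / 20% test split."""
    idx = rng.permutation(n_samples)
    cut = int(0.8 * n_samples)
    return idx[:cut], idx[cut:]

def prediction_error_ratio(y_true, y_pred_method, y_pred_baseline):
    """Ratio of a method's test MSE to the baseline's (here standing in for OFLR)."""
    mse_method = np.mean((y_true - y_pred_method) ** 2)
    mse_baseline = np.mean((y_true - y_pred_baseline) ** 2)
    return mse_method / mse_baseline

# Toy check of the split sizes on 100 samples.
rng = np.random.default_rng(0)
train_idx, test_idx = train_test_split_80_20(100, rng)
assert len(train_idx) == 80 and len(test_idx) == 20
```

A ratio below 1 would indicate that the method outperforms the baseline on the held-out test set, matching how the reported ratios are read.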
Hardware Specification | No | The paper does not explicitly mention any specific hardware used for running the experiments (e.g., GPU models, CPU types, or cloud computing instance details).
Software Dependencies | No | The paper mentions "The R code" in a footnote, indicating the programming language used, but it does not specify any particular software libraries, packages, or solvers with their version numbers.
Experiment Setup | Yes | "For each algorithm, we set the regularization parameters λ1 and λ2 to the optimal values in Theorem 4.3 and select the constants using cross-validation."
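The tuning step quoted above (theoretically-rated penalties whose leading constants are chosen by cross-validation) could look roughly like the sketch below. A plain ridge penalty stands in for the paper's functional/RKHS penalties, and every name, the k-fold scheme, and the candidate grid are assumptions for illustration only.

```python
import numpy as np

def kfold_indices(n, k, rng):
    """Yield (train, validation) index pairs for k-fold cross-validation."""
    folds = np.array_split(rng.permutation(n), k)
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, folds[i]

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def select_constant(X, y, constants, lam_rate, k=5, seed=0):
    """Pick the constant c minimizing CV error for lambda = c * lam_rate,
    where lam_rate plays the role of the theorem's optimal rate."""
    rng = np.random.default_rng(seed)
    splits = list(kfold_indices(len(y), k, rng))  # same folds for every c
    best_c, best_err = None, np.inf
    for c in constants:
        errs = [np.mean((y[va] - X[va] @ ridge_fit(X[tr], y[tr], c * lam_rate)) ** 2)
                for tr, va in splits]
        if np.mean(errs) < best_err:
            best_c, best_err = c, np.mean(errs)
    return best_c
```

The design choice mirrored here is that cross-validation only searches over a small grid of constants, while the rate of the penalty is fixed by theory rather than tuned.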