Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Learning from Similar Linear Representations: Adaptivity, Minimaxity, and Robustness
Authors: Ye Tian, Yuqi Gu, Yang Feng
JMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive numerical experiments to validate our theoretical findings. |
| Researcher Affiliation | Academia | Ye Tian EMAIL (Department of Statistics, Columbia University, New York, NY 10027, USA); Yuqi Gu EMAIL (Department of Statistics, Columbia University, New York, NY 10027, USA); Yang Feng EMAIL (Department of Biostatistics, School of Global Public Health, New York University, New York, NY 10003, USA) |
| Pseudocode | Yes | Algorithm 1: Penalized ERM; Algorithm 2: Spectral Method; Algorithm 3: Adaptation to unknown intrinsic dimension r |
| Open Source Code | Yes | The code to reproduce the results is available at https://github.com/ytstat/RL-MTL-TL. |
| Open Datasets | Yes | In this subsection, we applied different approaches to a real data set, the Human Activity Recognition (HAR) Using Smartphones Data Set. ... The original data set is available at the UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones. |
| Dataset Splits | Yes | For each task, in each replication, we used 50% of the samples as training data and held out the remaining 50% as test data. |
| Hardware Specification | Yes | The experiments were run on the Terremoto HPC Cluster of Columbia University with a CPU Intel Xeon Gold 6126 2.6 GHz. We used a single core with 3 GB of memory when running each method. |
| Software Dependencies | No | All the experiments were implemented in Python. For penalized ERM (pERM, Algorithm 1), we used the automatic differentiation implemented in PyTorch (Paszke et al., 2019) along with the Adam optimizer (Kingma and Ba, 2015) to solve the optimization problem (3) in Step 1. ... The method was implemented in an R package RMTL (Cao et al., 2019), and we used the Python package rpy2 to call the functions cv.MTL and MTL in the R package RMTL. No specific version numbers for Python, PyTorch, or RMTL are provided. |
| Experiment Setup | Yes | Consistent with our theory, we set the penalty parameters λ = √(r(p + log T)) and γ = √(p + log T) in pERM, and γ = 0.5√(p + log T) in the spectral method. ... We set the learning rate equal to 0.01 in the torch.optim.Adam function and kept all other parameters at their default values. |
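The penalty-parameter choices quoted in the Experiment Setup row can be sketched numerically. This is a minimal sketch assuming the reconstructed square-root forms λ = √(r(p + log T)), γ = √(p + log T) (pERM), and γ = 0.5√(p + log T) (spectral method); the function names and the example values of p, T, and r below are hypothetical, not from the paper.

```python
import math

def perm_penalties(p, T, r):
    """Penalty parameters for pERM as stated in the paper:
    lambda = sqrt(r * (p + log T)), gamma = sqrt(p + log T)."""
    lam = math.sqrt(r * (p + math.log(T)))
    gamma = math.sqrt(p + math.log(T))
    return lam, gamma

def spectral_gamma(p, T):
    """Penalty parameter for the spectral method:
    gamma = 0.5 * sqrt(p + log T)."""
    return 0.5 * math.sqrt(p + math.log(T))

# Hypothetical problem sizes: dimension p, number of tasks T, intrinsic rank r.
lam, gamma = perm_penalties(p=10, T=5, r=2)
```

The spectral method's γ is, under this reading, exactly half the pERM γ for the same (p, T), which is consistent with the quoted 0.5 factor.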