Optimistic Rates for Multi-Task Representation Learning

Authors: Austin Watkins, Enayat Ullah, Thanh Nguyen-Tang, Raman Arora

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study the problem of transfer learning via Multi-Task Representation Learning (MTRL), wherein multiple source tasks are used to learn a good common representation, and a predictor is trained on top of it for the target task. Under standard regularity assumptions on the loss function and task diversity, we provide new statistical rates on the excess risk of the target task, which demonstrate the benefit of representation learning.
Researcher Affiliation | Academia | Austin Watkins, Johns Hopkins University, Baltimore, MD 21218, awatki29@jhu.edu; Enayat Ullah, Johns Hopkins University, Baltimore, MD 21218, enayat@jhu.edu; Thanh Nguyen-Tang, Johns Hopkins University, Baltimore, MD 21218, nguyent@cs.jhu.edu; Raman Arora, Johns Hopkins University, Baltimore, MD 21218, arora@cs.jhu.edu
Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link regarding the release of open-source code.
Open Datasets | No | The paper is theoretical and does not conduct experiments; therefore, it does not mention publicly available or open datasets for training.
Dataset Splits | No | The paper is theoretical and does not conduct experiments; therefore, no dataset splits for training, validation, or testing are provided.
Hardware Specification | No | The paper is theoretical and does not conduct experiments; therefore, no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not conduct experiments; therefore, no specific software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and does not conduct experiments; therefore, no experimental setup details such as hyperparameters or training settings are provided.
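
The abstract quoted above describes a two-stage MTRL pipeline: a shared representation is learned from several source tasks, and only a low-dimensional predictor is then fit on the (typically small) target sample. Since the paper itself provides no code, the following is a minimal, self-contained sketch of that pipeline on synthetic linear tasks. The dimensions, the data-generating model, and the subspace estimate (per-task least squares followed by an SVD) are illustrative assumptions for this sketch, not the ERM procedure or the excess-risk rates analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, T = 20, 3, 10          # ambient dim, representation dim, number of source tasks
n_src, n_tgt = 100, 30       # samples per source task, samples for the target task

# Synthetic linear tasks sharing a k-dimensional subspace (illustrative only).
B_true = np.linalg.qr(rng.standard_normal((d, k)))[0]
W_true = rng.standard_normal((k, T + 1))

def make_task(t, n):
    X = rng.standard_normal((n, d))
    y = X @ (B_true @ W_true[:, t]) + 0.1 * rng.standard_normal(n)
    return X, y

sources = [make_task(t, n_src) for t in range(T)]
X_tgt, y_tgt = make_task(T, n_tgt)

# Stage 1: learn a common representation from the source tasks.
# Here: fit a full d-dimensional least-squares predictor per source task and
# take the top-k left singular vectors of the stacked predictors as the shared
# subspace (a simple stand-in for learning the representation).
V = np.column_stack([np.linalg.lstsq(X, y, rcond=None)[0] for X, y in sources])
B_hat = np.linalg.svd(V, full_matrices=False)[0][:, :k]

# Stage 2: freeze the representation and fit only a k-dimensional predictor
# on the small target sample -- the step whose excess risk the paper bounds.
w_hat = np.linalg.lstsq(X_tgt @ B_hat, y_tgt, rcond=None)[0]

X_test, y_test = make_task(T, 5000)
mse = np.mean((X_test @ B_hat @ w_hat - y_test) ** 2)
print(f"target test MSE with transferred representation: {mse:.4f}")
```

The point of the sketch is the division of labor the paper studies: the source tasks carry the cost of estimating the high-dimensional representation, while the target task only needs to estimate k parameters on top of it.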