Provable Pathways: Learning Multiple Tasks over Multiple Paths

Authors: Yingcong Li, Samet Oymak

AAAI 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Numerical experiments support our theory and verify the benefits of multipath representations. Finally, we also highlight multiple future directions." |
| Researcher Affiliation | Academia | ¹University of California, Riverside; ²University of Michigan, Ann Arbor. {yli692@, oymak@ece.}ucr.edu |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | No | The paper describes generating synthetic data for its experiments (e.g., "generate B1 and {B2^k}_{k=1}^K with orthonormal rows uniformly at random independently") but does not provide access to the generated data or to code that reproduces its generation. |
| Dataset Splits | No | The paper specifies N samples per task for MTL training and M samples for transfer learning, but it does not explicitly detail training, validation, and test splits for a single dataset. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not name specific software packages with version numbers. |
| Experiment Setup | Yes | "We set ambient dimension p = 32, shared embedding R = 8, and cluster embeddings r = 2. We consider a base configuration of K = 40 clusters, T/K = 10 tasks per cluster and N = 10 samples per task (see supplementary material for further details). ... In the experiment, we set γ = 0.6 to make sure hindsight knowledge of θt is sufficient to correctly cluster all tasks." |
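The quoted data-generation step, drawing matrices with orthonormal rows uniformly at random, can be sketched as below. The exact matrix shapes (a shared embedding B1 of size R × p and cluster embeddings B2^k of size r × R) are assumptions inferred from the quoted dimensions, not details confirmed by the paper; the sampling itself is the standard Haar-uniform construction via QR decomposition of a Gaussian matrix.

```python
import numpy as np

def random_orthonormal_rows(d, p, rng):
    """Return a (d, p) matrix with orthonormal rows, drawn uniformly
    at random (Haar measure) via QR of a Gaussian matrix."""
    G = rng.standard_normal((p, d))
    Q, R = np.linalg.qr(G)        # Q: (p, d) with orthonormal columns
    Q = Q * np.sign(np.diag(R))   # sign fix so the draw is Haar-uniform
    return Q.T                    # (d, p) with orthonormal rows

rng = np.random.default_rng(0)
p, R_dim, r, K = 32, 8, 2, 40    # dimensions from the quoted setup

# Hypothetical shapes: shared embedding B1 and K cluster embeddings B2^k.
B1 = random_orthonormal_rows(R_dim, p, rng)
B2 = [random_orthonormal_rows(r, R_dim, rng) for _ in range(K)]
```

With these shapes, a task in cluster k would use the composed map B2[k] @ B1, projecting from the ambient dimension p down to the cluster dimension r through the shared subspace.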