Universality in Transfer Learning for Linear Models

Authors: Reza Ghane, Danil Akhtiamov, Babak Hassibi

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In Section 5, we validate our theoretical results through empirical experiments." |
| Researcher Affiliation | Academia | Reza Ghane, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125 (rghanekh@caltech.edu); Danil Akhtiamov, Department of Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125 (dakhtiam@caltech.edu); Babak Hassibi, Department of Electrical Engineering and Department of Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125 (hassibi@caltech.edu) |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | "The code is attached to the supplementary material." |
| Open Datasets | No | The paper generates synthetic data from specified distributions (N(0, 1), Ber(0.5), χ²(1)) rather than using a named, publicly accessible dataset with concrete access information. |
| Dataset Splits | No | The paper generates synthetic data for various values of 𝑛 and 𝑑 but does not specify explicit train/validation/test splits (e.g., percentages or counts) for a fixed dataset. |
| Hardware Specification | Yes | "We used CVXPY (Grant and Boyd [2014], Agrawal et al. [2018]) to solve (1) efficiently on a Laptop CPU." |
| Software Dependencies | Yes | "We used CVXPY (Grant and Boyd [2014], Agrawal et al. [2018]) to solve (1) efficiently on a Laptop CPU." (See the CVXPY sketch below the table.) |
| Experiment Setup | Yes | "To do so, we fixed 𝑑 = 1000 and varied 𝑛 across different values... 𝑤0 is chosen according to Assumption 3 in such a manner that 𝑒𝑎 = 1... We also sampled the means µ1 and µ2 from N(0, (1/𝑑)I𝑑) with a cross-correlation 𝑟 = E[µ1𝑖µ2𝑖] = 0.9... For Figure 3, we fixed 𝜌 = 0.8, 2, 5, respectively..." (See the data-generation sketch below the table.) |
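As a concrete illustration of the synthetic setup quoted in the Experiment Setup and Open Datasets rows, the sketch below draws features from the three distributions the paper names (standardized to zero mean and unit variance, which is our assumption, as universality results are typically stated for standardized entries) and samples a pair of means µ1, µ2 with marginal law N(0, (1/𝑑)I𝑑). Reading the cross-correlation 𝑟 = 0.9 as a per-coordinate correlation coefficient, and all variable names, are assumptions rather than details taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 500, 1000, 0.9

# Feature distributions named in the paper's experiments, standardized to
# mean 0 and variance 1 (the standardization itself is our assumption).
samplers = {
    "gaussian": lambda size: rng.standard_normal(size),
    "bernoulli": lambda size: (rng.binomial(1, 0.5, size) - 0.5) / 0.5,  # Ber(0.5)
    "chi2": lambda size: (rng.chisquare(1, size) - 1.0) / np.sqrt(2.0),  # chi^2(1)
}
X = {name: sample((n, d)) for name, sample in samplers.items()}

# Means mu1, mu2 with marginals N(0, (1/d) I_d) and per-coordinate
# correlation coefficient r = 0.9 (our reading of "cross-correlation").
z1 = rng.standard_normal(d)
z2 = r * z1 + np.sqrt(1.0 - r**2) * rng.standard_normal(d)
mu1, mu2 = z1 / np.sqrt(d), z2 / np.sqrt(d)

print(d * np.mean(mu1 * mu2))  # empirical check: should be close to r
```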
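The Hardware Specification and Software Dependencies rows quote the paper's statement that optimization problem (1) was solved with CVXPY on a laptop CPU. The exact form of (1) is not reproduced in this report, so the following is only a minimal sketch of the kind of CVXPY call involved, using a hypothetical stand-in objective (squared loss plus a penalty pulling the solution toward pretrained weights w0, a natural shape for a transfer-learning formulation); the penalty form, the weight of 𝜌, and the variable names are assumptions, not the paper's problem (1).

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d, rho = 200, 1000, 0.8
X = rng.standard_normal((n, d))
y = X @ (rng.standard_normal(d) / np.sqrt(d)) + 0.1 * rng.standard_normal(n)
w0 = rng.standard_normal(d) / np.sqrt(d)  # hypothetical pretrained weights

w = cp.Variable(d)
# Stand-in for the paper's problem (1): least squares plus a proximity
# penalty to w0. The true objective in the paper may differ; this only
# illustrates the CVXPY workflow on a CPU-scale instance.
problem = cp.Problem(cp.Minimize(cp.sum_squares(X @ w - y)
                                 + rho * cp.sum_squares(w - w0)))
problem.solve()
print(problem.status, float(problem.value))
```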