Robust Meta-learning for Mixed Linear Regression with Small Batches

Authors: Weihao Kong, Raghav Somani, Sham Kakade, Sewoong Oh

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Simulation results supporting our theoretical prediction are shown in Fig. 2. For the analysis and the experimental setup we refer to K.'
Researcher Affiliation | Collaboration | Weihao Kong (kweihao@gmail.com), University of Washington; Raghav Somani (raghavs@cs.washington.edu), University of Washington; Sham Kakade (sham@cs.washington.edu), University of Washington & Microsoft Research; Sewoong Oh (sewoong@cs.washington.edu), University of Washington
Pseudocode | Yes | Algorithm 1: Meta-learning; Algorithm 2: Robust subspace estimation; Algorithm 3: Double-Filtering (a hedged sketch of the subspace-estimation idea appears below the table)
Open Source Code | No | The paper does not provide any statement or link regarding the availability of its source code.
Open Datasets | No | The paper describes a generative model for synthetic data (e.g., 'x_{i,j} ~ N(0, I_d) and ϵ_{i,j} ~ N(0, σ_i^2)') and refers to the 'meta-train dataset' as a collection of tasks; it does not mention any specific publicly available dataset with concrete access information for training (a sketch of this generative model appears below the table).
Dataset Splits | No | The paper describes data batches D_L1, D_L2, and D_H for its theoretical analysis and discusses 'achievable accuracy' in theoretical terms, but gives no empirical train/validation/test splits for reproducing the experiments.
Hardware Specification | No | The paper does not provide any details about the hardware (e.g., GPU or CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not name specific software with version numbers (e.g., programming languages, libraries, frameworks) used for the experiments.
Experiment Setup | No | The paper states 'For the analysis and the experimental setup we refer to K.'; Section K is in the supplementary material and is not provided, so the main text contains no specific hyperparameters, training configurations, or system-level settings.
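The generative model quoted in the Open Datasets row is fully specified, so the synthetic meta-train data can be regenerated even though no dataset is released. Below is a minimal sketch of that generation process; the number of tasks, dimension d, number of components k, batch size, noise level, and all function names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def generate_meta_train_tasks(n_tasks=1000, d=20, k=4, batch_size=2,
                              noise_std=0.1, seed=0):
    """Sample synthetic mixed-linear-regression tasks (illustrative sketch).

    Each task draws a regression vector beta uniformly from k unknown
    components, then observes a small batch of pairs (x, y) with
    x_{i,j} ~ N(0, I_d) and y = <beta, x> + eps, eps ~ N(0, noise_std^2),
    following the generative model quoted in the table above.
    """
    rng = np.random.default_rng(seed)
    # k ground-truth regression vectors (columns), unknown to the learner.
    components = rng.standard_normal((d, k))
    tasks = []
    for _ in range(n_tasks):
        j = rng.integers(k)                       # mixture component of this task
        beta = components[:, j]
        X = rng.standard_normal((batch_size, d))  # x_{i,j} ~ N(0, I_d)
        eps = noise_std * rng.standard_normal(batch_size)  # eps ~ N(0, sigma^2)
        y = X @ beta + eps
        tasks.append((X, y))
    return tasks, components
```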
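Algorithm 2 (Robust subspace estimation) appears in the paper only as pseudocode; the sketch below illustrates the non-robust core of the subspace-estimation idea, assuming the standard pairwise-moment estimator used in this line of work on mixed linear regression. The filtering steps that make the paper's version robust to corrupted tasks are not reproduced here, and the function name is hypothetical.

```python
import numpy as np

def estimate_subspace(tasks, k):
    """Non-robust sketch: estimate the span of the regression vectors.

    Uses the first two examples of each task to form the pairwise moment
    (1/n) * sum_i y_{i,1} y_{i,2} * sym(x_{i,1} x_{i,2}^T), whose expectation
    is a weighted sum of beta_j beta_j^T, and returns its top-k eigenvectors.
    This only illustrates the idea behind Algorithm 2; the paper's robust
    version additionally filters out corrupted or outlier tasks.
    """
    d = tasks[0][0].shape[1]
    M = np.zeros((d, d))
    n = 0
    for X, y in tasks:
        if X.shape[0] < 2:
            continue  # need at least two examples per task for an unbiased pair
        outer = y[0] * y[1] * np.outer(X[0], X[1])
        M += 0.5 * (outer + outer.T)  # symmetrize the rank-one term
        n += 1
    M /= max(n, 1)
    # Top-k eigenvectors (by absolute eigenvalue) span the estimated subspace.
    eigvals, eigvecs = np.linalg.eigh(M)
    order = np.argsort(np.abs(eigvals))[::-1][:k]
    return eigvecs[:, order]  # d x k orthonormal basis estimate
```

Fed with tasks from the generator above, the returned basis should align with the span of the true components as the number of tasks grows; the paper's guarantees concern the robust, filtered version of this estimate.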