Mixed Linear Regression with Multiple Components

Authors: Kai Zhong, Prateek Jain, Inderjit S. Dhillon

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From Section 5, "Numerical Experiments": "In this section, we use synthetic data to show the properties of our algorithm that minimizes Eq. (1), which we call LOSCO (LOcally Strongly Convex Objective). We generate data points and parameters from standard normal distribution. We set K = 3 and p_k = 1/3 for all k ∈ [K]. The error is defined as err^(t) = min_{π ∈ Perm([K])} max_{k ∈ [K]} ‖w_k^(t) − w*_{π(k)}‖, where Perm([K]) is the set of all the permutation functions on the set [K]. The errors reported in the paper are averaged over 10 trials. ... Fig. 1(a) shows the recovery rate for different dimensions and different samples. ... Table 1: Time (sec.) comparison for different subspace clustering methods" (see the data-generation sketch after the table).
Researcher Affiliation | Collaboration | Kai Zhong (University of Texas at Austin, zhongkai@ices.utexas.edu), Prateek Jain (Microsoft Research India, prajain@microsoft.com), Inderjit S. Dhillon (University of Texas at Austin, inderjit@cs.utexas.edu)
Pseudocode | Yes | Algorithm 1: Initialization for MLR via Tensor Method; Algorithm 2: Gradient Descent for MLR; Algorithm 3: Power Method for SC
Open Source Code | No | The paper mentions using 'publicly available codes for all the other methods' (i.e., the baseline comparisons) but does not state that the code for its own method is open source, provide a link, or indicate its availability.
Open Datasets | No | The paper states: 'In this section, we use synthetic data to show the properties of our algorithm... We generate data points and parameters from standard normal distribution.' As the data is synthetically generated and no access information is provided, it is not a publicly available dataset.
Dataset Splits | No | The paper does not explicitly specify exact dataset split percentages or sample counts for training, validation, or test sets. It uses synthetic data and does not refer to standard predefined splits.
Hardware Specification | No | The paper only states 'The experiments are conducted in Matlab.' and does not provide any specific details about the hardware (e.g., GPU/CPU models, memory, or cloud instances) used for running the experiments.
Software Dependencies | No | The paper mentions 'Matlab' and refers to a 'robust tensor power method' but does not provide specific version numbers for these or any other software components or libraries, which are necessary for a reproducible description of ancillary software.
Experiment Setup | Yes | "We set K = 3 and p_k = 1/3 for all k ∈ [K]. The error is defined as err^(t) = min_{π ∈ Perm([K])} max_{k ∈ [K]} ‖w_k^(t) − w*_{π(k)}‖, where Perm([K]) is the set of all the permutation functions on the set [K]. The errors reported in the paper are averaged over 10 trials. ... We set both of the two parameters in the robust tensor power method (denoted as N and L in Algorithm 1 in [2]) to be 100." (see the error-metric sketch after the table).
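
The quoted setup pins down most of what is needed to regenerate comparable synthetic data: parameters and data points drawn from a standard normal distribution, K = 3 components, and equal mixing weights p_k = 1/3. Below is a minimal Python/NumPy sketch (the authors' experiments were run in Matlab), assuming the usual mixed-linear-regression model y_i = ⟨w*_{z_i}, x_i⟩ with a hidden label z_i drawn according to p_k and no additive noise, since the noise model is not quoted in the report.

```python
import numpy as np

def generate_mlr_data(n, d, K=3, seed=0):
    """Synthetic mixed linear regression data following the quoted setup:
    standard-normal parameters and covariates, equal mixing weights p_k = 1/K.
    The noiseless response model y_i = <w*_{z_i}, x_i> is an assumption."""
    rng = np.random.default_rng(seed)
    W_star = rng.standard_normal((K, d))      # ground-truth parameters w*_1, ..., w*_K
    X = rng.standard_normal((n, d))           # data points x_i ~ N(0, I_d)
    z = rng.integers(0, K, size=n)            # hidden component labels, p_k = 1/K
    y = np.einsum("nd,nd->n", X, W_star[z])   # y_i = <w*_{z_i}, x_i>
    return X, y, W_star, z
```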
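
The error definition quoted in the table, err^(t) = min_{π ∈ Perm([K])} max_{k ∈ [K]} ‖w_k^(t) − w*_{π(k)}‖, matches estimated components to ground-truth components under the best permutation before taking the worst per-component distance. A brute-force sketch of that metric (fine for K = 3; the Euclidean norm is an assumption, as the norm is not specified in the quote):

```python
from itertools import permutations

import numpy as np

def permutation_error(W_hat, W_star):
    """min over permutations pi of max_k ||w_hat_k - w*_{pi(k)}||_2."""
    K = W_hat.shape[0]
    best = np.inf
    for pi in permutations(range(K)):
        worst = max(np.linalg.norm(W_hat[k] - W_star[pi[k]]) for k in range(K))
        best = min(best, worst)
    return best
```

Per the quote, the reported numbers would then be this quantity averaged over 10 independent trials.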