Frank-Wolfe-based Algorithms for Approximating Tyler's M-estimator

Authors: Lior Danon, Dan Garber

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In order to give some demonstration of the empirical performance of our Frank-Wolfe-based algorithms, we conducted two types of experiments that closely follow those in [28] (Chapter 3) with some minor changes, which consider a Gaussian distribution with outlier contamination, and a heavy-tailed multivariate t-distribution. [...] In Figure 1 we report the distance in spectral norm (in log scale) of the iterates of the different methods from Q*, and in Figure 2 we report the approximation error f(Q_t) - f(Q*) of the iterates (also in log scale).
Researcher Affiliation | Academia | Lior Danon, Technion - Israel Institute of Technology, Haifa, Israel 3200003, liordanon@campus.technion.ac.il; Dan Garber, Technion - Israel Institute of Technology, Haifa, Israel 3200003, dangar@technion.ac.il
Pseudocode | Yes | Algorithm 1: Frank-Wolfe variants for approximating Tyler's M-estimator
Open Source Code | No | The paper's internal checklist states that code is included or that sufficient details for reproduction are provided, but the main text contains no explicit statement of, or link to, open-source code for the methodology.
Open Datasets | No | The paper describes a data-generation process (a Gaussian distribution with outlier contamination and a heavy-tailed multivariate t-distribution) and references [28] for the experiments, but does not state that a public dataset with concrete access information was used.
Dataset Splits | No | The paper does not provide training, validation, or test dataset splits. It describes data generation and then directly evaluates performance metrics.
Hardware Specification | No | The paper's checklist explicitly answers 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?' with [N/A].
Software Dependencies | No | The paper mentions using Python's scipy.sparse.linalg.eigsh procedure but does not provide version numbers for Python, SciPy, or any other software dependency.
Experiment Setup | Yes | In both experiments we set p = 50, n = 2500, and we take the true unknown covariance to be a Toeplitz matrix with elements Q*_{i,j} = 0.85^{|i-j|}. [...] All methods are initialized from the sample covariance (normalized to have trace equal to p) [...] For all experiments we compute Tyler's estimator to high accuracy by running 250 iterations of the Fixed-point method and we denote the resulting matrix by Q*.
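As a concrete illustration of the experiment setup and data-generation process quoted above, here is a minimal NumPy sketch. The dimensions p = 50, n = 2500, the Toeplitz entries 0.85^|i-j|, and the trace-p initialization come from the paper's description; the contamination fraction, outlier scale, and t-distribution degrees of freedom are illustrative assumptions, not values stated here.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 2500                 # dimensions from the paper's setup
eps, nu = 0.05, 3.0             # assumed contamination fraction / t degrees of freedom

# True covariance: Toeplitz matrix with entries Q*_{i,j} = 0.85^{|i-j|}
Q_star = 0.85 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(Q_star)

# (a) Gaussian samples with outlier contamination: replace a fraction eps
# of the points with large-norm outliers (the exact model follows [28])
X = rng.standard_normal((n, p)) @ L.T
n_out = int(eps * n)
X[:n_out] = 10.0 * rng.standard_normal((n_out, p))  # hypothetical outlier scale

# (b) Heavy-tailed multivariate t samples: Gaussian vectors divided by an
# independent sqrt(chi^2_nu / nu) factor
w = np.sqrt(rng.chisquare(nu, size=n) / nu)
T = (rng.standard_normal((n, p)) @ L.T) / w[:, None]

# Initialization shared by all methods: the sample covariance,
# rescaled so that its trace equals p
S = X.T @ X / n
Q0 = p * S / np.trace(S)
```

The Toeplitz construction via `np.subtract.outer` builds the |i-j| exponent table in one vectorized step; any equivalent construction works.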
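The two error measures reported in Figures 1 and 2 (the spectral-norm distance of the iterates from Q*, and the objective gap f(Q_t) - f(Q*), both in log scale) can be computed as in this sketch; the small matrices here are toy stand-ins, not the paper's data.

```python
import numpy as np

def spectral_distance(Q, Q_star):
    # Spectral norm (largest singular value) of the difference, ||Q - Q*||_2
    return np.linalg.norm(Q - Q_star, ord=2)

# Toy symmetric positive definite stand-ins for an iterate and the reference
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q_star = A @ A.T + 5.0 * np.eye(5)
Q = Q_star + 0.01 * np.eye(5)   # an "iterate" offset by 0.01 * I

d = spectral_distance(Q, Q_star)   # equals 0.01 here, since Q - Q* = 0.01 * I
log_d = np.log10(d)                # the figures report distances in log scale
```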
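The Software Dependencies row notes that the paper uses Python's scipy.sparse.linalg.eigsh, which computes extreme eigenpairs of a symmetric matrix; in Frank-Wolfe-type methods such a leading eigenvector typically defines the rank-one update direction. The following is a generic usage sketch of that routine only; how the paper wires it into Algorithm 1 is not shown here, and the matrix G is an arbitrary stand-in.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
p = 50
A = rng.standard_normal((p, p))
G = (A + A.T) / 2   # symmetric stand-in matrix (e.g., a gradient-like quantity)

# eigsh with k=1 and which='LA' returns the largest-algebraic eigenpair;
# the returned eigenvector is unit-norm
vals, vecs = eigsh(G, k=1, which='LA')
v = vecs[:, 0]
```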