Globally Optimal Learning for Structured Elliptical Losses

Authors: Yoav Wald, Nofar Noy, Gal Elidan, Ami Wiesel

NeurIPS 2019

Reproducibility assessment. Each entry below gives the variable, the result, and the supporting LLM response.
Research Type: Experimental. "Finally, we demonstrate the empirical appeal of using these losses for regression on synthetic and real-life data."
Researcher Affiliation: Collaboration.
- Yoav Wald (Hebrew University), yoav.wald@mail.huji.ac.il
- Nofar Noy (Hebrew University), nofar.noy@mail.huji.ac.il
- Ami Wiesel (Google Research and Hebrew University), awiesel@google.com
- Gal Elidan (Google Research and Hebrew University), elidan@google.com
Pseudocode: Yes. Algorithm 1, "Minimization Majorization for Elliptical Markov Random Fields".
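For context, the algorithm named above belongs to the majorization-minimization (MM) family. The sketch below is our own generic illustration of MM, not the paper's Algorithm 1: iteratively reweighted least squares (IRLS) for the one-dimensional L1 location problem, where each step minimizes a quadratic majorizer of the absolute-value loss.

```python
import numpy as np

def mm_l1_location(a, n_iter=200, eps=1e-8):
    """Majorization-minimization (IRLS) for min_x sum_i |a_i - x|.
    Each |a_i - x| is majorized by a quadratic touching it at the
    current iterate, so every step is a closed-form weighted mean."""
    x = float(np.mean(a))  # initial guess
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(a - x), eps)  # IRLS weights
        x = float(np.sum(w * a) / np.sum(w))      # minimize the majorizer
    return x

a = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
print(mm_l1_location(a))  # converges near the median (3.0), robust to the outlier
```

Each MM step is guaranteed not to increase the objective, which is the same monotone-descent property such algorithms rely on for convergence guarantees.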
Open Source Code: No. The paper does not provide an explicit statement of, or a link to, open-source code for the described methodology.
Open Datasets: Yes. "Instances of {z_i}_{i=1}^m are drawn from multivariate Generalized Gaussian distributions [22]"
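For illustration only (this sampler is our own sketch, not code from the paper), multivariate generalized Gaussian data can be drawn via the standard stochastic representation x = tau^(1/(2*beta)) * Sigma^(1/2) * u, with tau Gamma-distributed and u uniform on the unit sphere; beta = 1 recovers the Gaussian case and beta < 1 gives heavier tails.

```python
import numpy as np

def sample_mggd(n, sigma, beta, rng):
    """Draw n samples from a zero-mean multivariate generalized
    Gaussian with scatter matrix `sigma` and shape parameter `beta`,
    using the representation x = tau^(1/(2*beta)) * Sigma^(1/2) * u
    with tau ~ Gamma(p/(2*beta), scale=2) and u uniform on the sphere."""
    p = sigma.shape[0]
    A = np.linalg.cholesky(sigma)                  # Sigma^(1/2)
    u = rng.standard_normal((n, p))
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # uniform directions
    tau = rng.gamma(shape=p / (2 * beta), scale=2.0, size=n)
    r = tau ** (1.0 / (2 * beta))                  # radial component
    return (r[:, None] * u) @ A.T

rng = np.random.default_rng(0)
z = sample_mggd(5000, np.eye(3), beta=0.5, rng=rng)  # heavy-tailed sample
```

With beta = 1 and sigma = I, the sample covariance of the draws should be close to the identity, which is a quick sanity check on the representation.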
Dataset Splits: Yes. "We use data on the years between 2004 and mid-2011 (excluding the mid-2007 to mid-2009 financial crisis) as training data and test over the values from then until 2015."
Hardware Specification: No. The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies: No. The paper mentions using the "sklearn function make_sparse_psd [23]" but does not provide a version number for scikit-learn or any other software dependency.
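The function the paper calls "make_sparse_psd" most likely refers to scikit-learn's `make_sparse_spd_matrix` (an assumption on our part; no exact name or version is given in the paper). A minimal sketch of its use:

```python
import numpy as np
from sklearn.datasets import make_sparse_spd_matrix

# Generate a sparse symmetric positive-definite matrix (often used as a
# ground-truth precision matrix in graphical-model experiments).  `alpha`
# is the probability of zeroing out a coefficient, so larger alpha
# yields a sparser matrix.
prec = make_sparse_spd_matrix(10, alpha=0.9, random_state=0)

print(prec.shape)          # (10, 10)
print(np.mean(prec == 0))  # fraction of exact zeros
```

Because scikit-learn's random generators changed behavior across releases, pinning the library version is what would make such an experiment reproducible bit-for-bit.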
Experiment Setup: No. The paper describes the setup of its synthetic and real-life experiments, including data sources and tasks, but does not provide specifics such as hyperparameters, learning rates, or batch sizes.