Calculating Optimistic Likelihoods Using (Geodesically) Convex Optimization

Authors: Viet Anh Nguyen, Soroosh Shafieezadeh-Abadeh, Man-Chung Yue, Daniel Kuhn, Wolfram Wiesemann

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We showcase the advantages of working with optimistic likelihoods on a classification problem using synthetic as well as empirical data. We test our theoretical findings in the context of a classification problem, and we report on numerical experiments in Section 4.
Researcher Affiliation | Academia | Viet Anh Nguyen and Soroosh Shafieezadeh-Abadeh, École Polytechnique Fédérale de Lausanne, Switzerland ({viet-anh.nguyen, soroosh.shafiee}@epfl.ch); Man-Chung Yue, The Hong Kong Polytechnic University, Hong Kong (manchung.yue@polyu.edu.hk); Daniel Kuhn, École Polytechnique Fédérale de Lausanne, Switzerland (daniel.kuhn@epfl.ch); Wolfram Wiesemann, Imperial College Business School, United Kingdom (ww@imperial.ac.uk)
Pseudocode | Yes | Algorithm 1: Projected Geodesic Gradient Descent Algorithm (a hedged sketch of one such iteration appears after this table).
Open Source Code | Yes | Our algorithm and all tests are implemented in Python, and the source code is available from https://github.com/sorooshafiee/Optimistic_Likelihoods.
Open Datasets | Yes | We compare the performance of our flexible discriminant rules with standard QDA implementations from the literature on datasets from the UCI repository [4]. [4] K. Bache and M. Lichman. UCI machine learning repository, 2013. Available from http://archive.ics.uci.edu/ml.
Dataset Splits | Yes | In each trial, we randomly select 75% of the data for training and the remaining 25% for testing. The size of the ambiguity set and the regularization parameter are selected using stratified 5-fold cross validation. (A hedged split and cross-validation sketch appears after this table.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned in the paper.
Software Dependencies | No | Our algorithm and all tests are implemented in Python, and the source code is available from https://github.com/sorooshafiee/Optimistic_Likelihoods. No specific version numbers for Python or any libraries are provided.
Experiment Setup | Yes | To study the empirical convergence behavior of Algorithm 1, for n ∈ {10, 20, ..., 100} we generate 100 covariance matrices Σ̂ according to the following procedure. We (i) draw a standard normal random matrix B ∈ R^{n×n} and compute A = B + Bᵀ; we (ii) conduct an eigenvalue decomposition A = RDRᵀ; we (iii) replace D with a random diagonal matrix D̂ whose diagonal elements are sampled uniformly from [1, 10]^n; and we (iv) set Σ̂ = RD̂Rᵀ. For each of these covariance matrices, we set µ̂ = 0, M = 1, xᵀM⁻¹x for a standard normal random vector x ∈ R^n and calculate the optimistic likelihood (6) for ρ = √n/100. This choice of ρ ensures that the radius of the ambiguity set scales with n in the same way as the Frobenius norm. ... The size of the ambiguity set and the regularization parameter are selected using stratified 5-fold cross validation. (A sketch of the covariance-generation steps (i)-(iv) appears below.)
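
The Pseudocode row above only names Algorithm 1; the quoted evidence does not spell out its steps. Below is a minimal, hedged sketch of what one projected geodesic gradient descent iteration on the positive definite cone could look like, assuming an objective of the form log det Σ + tr(Σ⁻¹M), the affine-invariant (Fisher-Rao) geometry, and a geodesic ball of radius ρ around the nominal Σ̂. The function names, step size, and iteration count are illustrative choices, not the authors' implementation.

    import numpy as np

    def sym_fun(S, f):
        """Apply a scalar function to the eigenvalues of a symmetric matrix S."""
        w, V = np.linalg.eigh((S + S.T) / 2)
        return (V * f(w)) @ V.T

    def project_to_ball(Sigma, Sigma_hat, rho):
        """Project Sigma onto {S : ||log(Sigma_hat^{-1/2} S Sigma_hat^{-1/2})||_F <= rho}."""
        R = sym_fun(Sigma_hat, np.sqrt)                          # Sigma_hat^{1/2}
        R_inv = sym_fun(Sigma_hat, lambda w: 1.0 / np.sqrt(w))   # Sigma_hat^{-1/2}
        L = sym_fun(R_inv @ Sigma @ R_inv, np.log)               # geodesic "direction" from Sigma_hat
        d = np.linalg.norm(L, "fro")
        if d <= rho:
            return Sigma
        return R @ sym_fun((rho / d) * L, np.exp) @ R            # shrink along the geodesic

    def projected_geodesic_gradient_descent(Sigma_hat, M, rho, step=0.05, iters=300):
        """Minimize log det(Sigma) + tr(Sigma^{-1} M) over the geodesic ball around Sigma_hat."""
        Sigma = Sigma_hat.copy()
        for _ in range(iters):
            G = Sigma - M                                        # Riemannian gradient under the affine-invariant metric
            S_half = sym_fun(Sigma, np.sqrt)
            S_inv_half = sym_fun(Sigma, lambda w: 1.0 / np.sqrt(w))
            # exponential-map step along -G, then project back onto the feasible ball
            Sigma = S_half @ sym_fun(-step * (S_inv_half @ G @ S_inv_half), np.exp) @ S_half
            Sigma = project_to_ball(Sigma, Sigma_hat, rho)
        return Sigma

A fixed step size and iteration count are used purely for illustration; a practical implementation would add a convergence check on the objective or the gradient norm.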
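
For the Dataset Splits row, a minimal scikit-learn sketch of the described protocol (75%/25% split plus stratified 5-fold cross validation) is shown below. The random data, the QDA estimator, and the reg_param grid are stand-ins, since the quoted text does not say how the ambiguity-set size or the regularization grid is searched.

    import numpy as np
    from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    # placeholder data standing in for a UCI-style feature matrix and label vector
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = rng.integers(0, 2, size=200)

    # 75% / 25% stratified train/test split, as described in the quoted evidence
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

    # stratified 5-fold CV over a hyperparameter grid; QDA's reg_param stands in for
    # the paper's regularization parameter (the ambiguity-set radius is not exposed here)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    search = GridSearchCV(QuadraticDiscriminantAnalysis(), {"reg_param": [0.0, 0.01, 0.1, 1.0]}, cv=cv)
    search.fit(X_tr, y_tr)
    print(search.best_params_, search.score(X_te, y_te))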
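
The Experiment Setup row's steps (i)-(iv) translate directly into code. The sketch below follows our reading of the quoted procedure; the √n/100 radius and the standard normal test point are taken from that quote, while the helper name and the final print are illustrative.

    import numpy as np

    def nominal_covariance(n, rng):
        """Steps (i)-(iv): build a random nominal covariance Sigma_hat with eigenvalues in [1, 10]."""
        B = rng.standard_normal((n, n))                    # (i) standard normal matrix
        A = B + B.T                                        #     symmetrized to A
        _, R = np.linalg.eigh(A)                           # (ii) eigenvectors R of A = R D R^T
        D_hat = np.diag(rng.uniform(1.0, 10.0, size=n))    # (iii) random diagonal drawn from [1, 10]^n
        return R @ D_hat @ R.T                             # (iv) Sigma_hat = R D_hat R^T

    rng = np.random.default_rng(0)
    for n in range(10, 101, 10):
        Sigma_hat = nominal_covariance(n, rng)
        x = rng.standard_normal(n)                         # standard normal test point (mu_hat = 0)
        rho = np.sqrt(n) / 100                             # ambiguity radius, per our reading of the quote
        # (Sigma_hat, x, rho) would then be passed to the optimistic-likelihood solver (6)
        print(n, rho, np.linalg.norm(Sigma_hat, "fro"))    # Frobenius norm grows like sqrt(n) here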