Multi-Fidelity Covariance Estimation in the Log-Euclidean Geometry

Authors: Aimee Maurais, Terrence Alsup, Benjamin Peherstorfer, Youssef Marzouk

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Evaluations of our approach using data from physical applications (heat conduction, fluid dynamics) demonstrate more accurate metric learning and speedups of more than one order of magnitude compared to benchmarks."
Researcher Affiliation | Academia | Massachusetts Institute of Technology, Cambridge, MA, USA; Courant Institute of Mathematical Sciences, New York University, New York, NY, USA. Correspondence to: Aimee Maurais <maurais@mit.edu>, Terrence Alsup <alsup@cims.nyu.edu>.
Pseudocode | Yes | The code shown in Figure 7 sketches an implementation of the LEMF estimator and requires only numerical linear algebra functions that are readily available in, e.g., NumPy/SciPy (Harris et al., 2020; Virtanen et al., 2020).
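To make the "only standard numerical linear algebra" claim concrete, here is a minimal two-fidelity sketch of a log-Euclidean-style multi-fidelity covariance estimator. It is an illustration under stated assumptions, not the paper's exact LEMF algorithm: the control-variate weight `alpha`, the nested reuse of the first `n_hi` low-fidelity samples, and the helper names `sym_logm`/`sym_expm` are all choices made here for the sketch.

```python
import numpy as np

def sym_logm(C):
    # Matrix log of an SPD matrix via its eigendecomposition.
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def sym_expm(S):
    # Matrix exp of a symmetric matrix; result is always SPD.
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def lemf_two_fidelity(y_hi, y_lo, alpha=1.0):
    """Two-fidelity sketch: correct a cheap low-fidelity covariance
    estimate with a control variate, combining everything in the
    matrix-log (log-Euclidean) domain so the result stays SPD."""
    n_hi = len(y_hi)
    C_hi = np.cov(y_hi, rowvar=False)           # few expensive samples
    C_lo_n = np.cov(y_lo[:n_hi], rowvar=False)  # low fidelity, same n_hi
    C_lo_m = np.cov(y_lo, rowvar=False)         # low fidelity, all samples
    L = sym_logm(C_hi) + alpha * (sym_logm(C_lo_m) - sym_logm(C_lo_n))
    return sym_expm(L)                          # map back to the SPD cone
```

Mapping back through the matrix exponential is what distinguishes this from a plain linear (Euclidean) multi-fidelity combination, which can produce indefinite estimates.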
Open Source Code | Yes | Python implementations of the LEMF, EMF, and truncated EMF estimators are available at https://github.com/amaurais/LEMF.
Open Datasets | No | The paper describes synthetic data generation based on physical models (e.g., 'We model θ ∼ N(0, I_{4×4}) and define high-fidelity samples as y^(0) = G^(0)(θ) and surrogate samples as y^(1) = G^(1)(θ)') rather than providing concrete access information for a publicly available or open dataset.
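The quoted setup (a shared parameter θ pushed through a high- and a low-fidelity map) could be sketched as below. The maps `G0` and `G1` here are hypothetical placeholders, not the paper's heat-conduction or fluid-dynamics models; the point is only that both fidelities are driven by the same draws of θ, which is what makes them correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's fidelity maps G^(0), G^(1).
def G0(theta):                      # "high fidelity" placeholder
    return np.tanh(theta) + 0.1 * theta**2

def G1(theta):                      # cheap surrogate placeholder
    return np.tanh(theta)

theta = rng.standard_normal((1000, 4))  # theta ~ N(0, I_{4x4})
y_hi = G0(theta)   # high-fidelity samples share theta with ...
y_lo = G1(theta)   # ... the correlated low-fidelity surrogate
```

Because the same θ feeds both maps, `y_hi` and `y_lo` are strongly correlated, which is the property multi-fidelity estimators exploit.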
Dataset Splits | No | The paper mentions pilot studies to estimate parameters for optimal sample allocation but does not describe conventional training/validation/test splits of a dataset.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for its experiments; it mentions only computational aspects such as the number of grid points or the time-step.
Software Dependencies | No | The paper mentions software libraries such as NumPy, SciPy, and pyqg but does not provide version numbers for them; the cited 'SciPy 1.0' is the title of the reference (Virtanen et al., 2020), not a pinned dependency version in the main text.
Experiment Setup | Yes | For the truncated multi-fidelity estimator, we set a threshold of δ = 10^-16 on the eigenvalues. We construct multi-fidelity estimators from a combination of 15 high-fidelity samples and an additional number n_lo,i of low-fidelity samples computed according to the optimal ratio in Proposition 3.4. In constructing A we set t = 0.1, motivated by (Zadeh et al., 2016, Figure 3).
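An eigenvalue threshold like δ = 10^-16 is typically used so that the matrix logarithm stays well defined when a sample covariance is (numerically) rank deficient. The helper below is one plausible reading of such a truncation, flooring eigenvalues at δ before taking the log; the function name and the flooring (rather than discarding) behavior are assumptions for illustration, not the paper's exact truncation rule.

```python
import numpy as np

def truncated_logm(C, delta=1e-16):
    """Matrix log of a (possibly rank-deficient) symmetric PSD matrix,
    with eigenvalues floored at delta so np.log never sees zero."""
    w, V = np.linalg.eigh(C)
    w = np.maximum(w, delta)       # threshold tiny/zero eigenvalues
    return (V * np.log(w)) @ V.T   # V diag(log w) V^T
```

Without the threshold, a rank-deficient sample covariance (common when the number of samples is below the dimension) would have a zero eigenvalue and an undefined matrix log.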