Pitfalls of Epistemic Uncertainty Quantification through Loss Minimisation
Authors: Viktor Bengs, Eyke Hüllermeier, Willem Waegeman
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | These formal results are supported by simulations on synthetic data sets (Section 5)... In the following, we investigate our findings regarding the empirical loss minimiser (ELM) (see Definition 1) in a simulation study on synthetic data. |
| Researcher Affiliation | Academia | Viktor Bengs and Eyke Hüllermeier: Institute of Informatics, University of Munich (LMU) and Munich Center for Machine Learning (viktor.bengs@lmu.de, eyke@lmu.de). Willem Waegeman: Department of Data Analysis and Mathematical Modeling, Ghent University (Willem.Waegeman@UGent.be). |
| Pseudocode | No | No section or figure in the paper is explicitly labeled 'Pseudocode' or 'Algorithm', nor are there any structured code-like blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper states, 'For each scenario, we generate repeatedly observations of different sizes N...' indicating the use of synthetic data generated by the authors, without providing access details or citing a publicly available dataset. |
| Dataset Splits | No | The paper conducts a 'simulation study on synthetic data' and mentions generating 'observations of different sizes N', but it specifies no train/validation/test splits and no cross-validation setup that would be needed for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or specific computing environments) used to run the simulations or experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies (e.g., library names with version numbers) needed to replicate the experiments. |
| Experiment Setup | No | The paper mentions varying 'different values of λ' and 'different sizes N' in its simulation study, which are parameters of the loss function and data generation, respectively. However, it does not provide detailed experimental setup for model training such as hyperparameters (learning rate, batch size) or specific optimizer settings. |
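Since the paper releases no code, the simulation it describes (computing the empirical loss minimiser, Definition 1, on repeatedly generated synthetic observations of different sizes N) can only be approximated. The sketch below is a minimal, hypothetical reconstruction of that setup, assuming Bernoulli-distributed observations, squared loss, and a grid-search approximation of the minimiser; none of these specifics (true parameter 0.7, grid resolution, sample sizes) come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_loss_minimiser(y, loss, grid):
    # Approximate the ELM: the candidate prediction minimising the
    # average loss over the observed sample y (grid search).
    risks = [np.mean(loss(y, theta)) for theta in grid]
    return grid[int(np.argmin(risks))]

# Hypothetical instantiation: squared loss and candidate
# predictions on a grid in [0, 1].
squared = lambda y, theta: (y - theta) ** 2
grid = np.linspace(0.0, 1.0, 101)

for N in (10, 100, 1000):  # "observations of different sizes N"
    y = rng.binomial(1, 0.7, size=N)  # assumed data-generating process
    elm = empirical_loss_minimiser(y, squared, grid)
    print(f"N={N}: ELM ≈ {elm:.2f}")
```

Under squared loss the ELM reduces to the sample mean, so the printed values concentrate around the true parameter as N grows; this is only an illustration of the kind of study the paper reports, not the authors' actual protocol.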