Credal Learning Theory

Authors: Michele Caprio, Maryam Sultana, Eleni Elia, Fabio Cuzzolin

NeurIPS 2024

Each entry below gives a reproducibility variable, its assessed result, and the LLM response supporting that assessment.
Research Type: Experimental
LLM response: In this section, we perform synthetic experiments to show that the bounds we find in Theorems 4.1 and 4.5 are indeed tighter than the classical SLT ones reported in Corollaries 4.3 and 4.7, respectively.
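For context, the classical SLT baselines referenced here have, for a finite hypothesis space, the standard textbook form sketched below; the exact statements of Corollaries 4.3 and 4.7 are in the paper and may differ in constants. With probability at least $1 - \delta$ over an i.i.d. sample of size $n$,

$$\sup_{h \in \mathcal{H}} \left| R(h) - \hat{R}_n(h) \right| \le \sqrt{\frac{\ln(2|\mathcal{H}|/\delta)}{2n}} \qquad \text{and, under realizability,} \qquad R(\hat{h}_n) \le \frac{\ln(|\mathcal{H}|/\delta)}{n},$$

where $R$ is the true risk, $\hat{R}_n$ the empirical risk, and $\hat{h}_n$ the empirical risk minimizer.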
Researcher Affiliation: Academia
LLM response: Michele Caprio, Department of Computer Science, University of Manchester, Manchester, UK (michele.caprio@manchester.ac.uk); Maryam Sultana, Eleni G. Elia, and Fabio Cuzzolin, School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford, UK ({msultana,eelia,fabio.cuzzolin}@brookes.ac.uk).
Pseudocode: No
LLM response: The paper describes theoretical concepts and mathematical derivations, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured, code-like steps for any procedure.
Open Source Code: No
LLM response: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in the supplemental material? Answer: [No] Justification: The experiments are synthetic and extremely easy to reproduce. Also, the experiments do not require any special libraries or large-scale real-world datasets.
Open Datasets: Yes
LLM response: Experiment 1: Let the available training sets be D1, D2, D3. Assume, for simplicity, that Ω = {x} × R ≅ R. Suppose that we specified the likelihood pdfs ℓ1 = N(−5, 1), ℓ2 = N(0, 1), and ℓ3 = N(5, 1). Experiment 2: We selected a standard Gaussian distribution N(0, 1) (mean 0, standard deviation 1) to generate data. Experiment 3: We generate synthetic data from Gaussian distributions (with the same parameters as in Experiment 1), with added uniform noise to ensure non-realizability.
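A minimal sketch of the synthetic data generation described in this entry, assuming NumPy; the sample size n, the seed, and the symmetric form of the uniform noise are illustrative assumptions, not values fixed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed
n = 1000                        # illustrative sample size per training set

# Experiment 1: three training sets D1, D2, D3 drawn from the specified
# likelihoods l1 = N(-5, 1), l2 = N(0, 1), l3 = N(5, 1).
means = [-5.0, 0.0, 5.0]
D = [rng.normal(loc=m, scale=1.0, size=n) for m in means]

# Experiment 2: data generated from a standard Gaussian N(0, 1).
D_exp2 = rng.normal(loc=0.0, scale=1.0, size=n)

# Experiment 3: the same Gaussians as Experiment 1, plus uniform noise so
# that realizability fails; the symmetric interval [-0.1, 0.1] is an
# assumption (the paper only reports a noise level of 0.1).
noise_level = 0.1
D_exp3 = [d + rng.uniform(-noise_level, noise_level, size=n) for d in D]
```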
Dataset Splits: No
LLM response: The paper explicitly mentions training and test sets but does not specify a distinct validation split or how one was used in any of the synthetic experiments.
Hardware Specification: No
LLM response: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [No] Justification: There is no need to discuss the allocation of computer resources, as the synthetic experiments we conducted can be performed on any standard computer.
Software Dependencies: No
LLM response: Does the paper provide specific ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiments? Answer: [No] Justification: The paper does not specify any particular software libraries or tools with version numbers for its synthetic experiments, as they are described as not requiring any special libraries.
Experiment Setup: Yes
LLM response: Experiment 1: Let the available training sets be D1, D2, D3. Assume, for simplicity, that Ω = {x} × R ≅ R. Suppose that we specified the likelihood pdfs ℓ1 = N(−5, 1), ℓ2 = N(0, 1), and ℓ3 = N(5, 1). Table B.1: Here the hypothesis space is such that |H| = 100 and δ = 0.05. Table B.4: Here the hypothesis space is such that |H| = 100, δ = 0.05, and the noise level is 0.1.
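A small sketch evaluating the classical finite-hypothesis bound at the stated parameters (|H| = 100, δ = 0.05); the sample sizes are illustrative, and the paper's credal bounds (Theorems 4.1 and 4.5) are not reproduced here since their statements are not included in this summary.

```python
import math

def classical_slt_bound(n: int, h_size: int = 100, delta: float = 0.05) -> float:
    """Classical agnostic PAC bound for a finite hypothesis space
    (Hoeffding + union bound): with probability >= 1 - delta, the
    generalization gap is at most sqrt(ln(2|H|/delta) / (2n))."""
    return math.sqrt(math.log(2 * h_size / delta) / (2 * n))

# Bound values for a few illustrative sample sizes.
for n in (100, 1000, 10000):
    print(f"n = {n:>6}: classical bound = {classical_slt_bound(n):.4f}")
```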