Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Mixture Proportion Estimation Beyond Irreducibility

Authors: Yilun Zhu, Aaron Fjeldsted, Darren Holland, George Landon, Azaree Lintereur, Clayton Scott

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our approach empirically exhibits improved estimation performance relative to baseline methods and to a recently proposed regrouping-based algorithm.
Researcher Affiliation | Academia | 1. Department of Electrical Engineering and Computer Science, University of Michigan. 2. Ken and Mary Alice Lindquist Department of Nuclear Engineering, Penn State University. 3. Department of Engineering Physics, Air Force Institute of Technology. 4. School of Engineering and Computer Science, Cedarville University. 5. Department of Statistics, University of Michigan.
Pseudocode | Yes | Algorithm 1 Subsampling MPE (SuMPE)
Open Source Code | Yes | The implementation is available at https://github.com/allan-z/SuMPE.
Open Datasets | Yes | We ran our algorithm on nuclear, synthetic, and some benchmark datasets taken from the UCI machine learning repository and MNIST, corresponding to all three scenarios described in Sec. 4.2.
Dataset Splits | No | The paper describes how samples are drawn for the mixture proportion estimation problem (e.g., m = n = 50,000 instances) and how some benchmark datasets are adapted. However, it does not explicitly provide details about standard train/validation/test dataset splits (e.g., percentages, exact counts, or specific split files) typically used for model training and evaluation.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions training neural networks and using the MCNP radiation transport code, but it does not specify any software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9').
Experiment Setup | Yes | A 1-hidden-layer neural network with 16 neurons was trained to predict P_sr(Y = 1 | X = x) for x ∈ (−∞, 2], thus A = (−∞, 2], and α(x) was chosen according to Eqn. (10).
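The experiment-setup evidence (a 1-hidden-layer, 16-neuron network estimating P_sr(Y = 1 | X = x) on the region A = (−∞, 2]) can be sketched as follows. This is a minimal illustration only: the 1-D Gaussian data, learning rate, and epoch count are assumptions for the sketch, not the paper's configuration, and the subsampling weights α(x) from the paper's Eqn. (10) are not reproduced here.

```python
import numpy as np

def train_posterior_net(x, y, hidden=16, lr=0.1, epochs=500, seed=0):
    """Train a 1-hidden-layer network (16 neurons, matching the quoted
    setup) with a sigmoid output to estimate P(Y = 1 | X = x)."""
    rng = np.random.default_rng(seed)
    X = x.reshape(-1, 1)
    W1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    n = len(y)
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                          # hidden layer
        p = 1 / (1 + np.exp(-(H @ W2 + b2).ravel()))      # sigmoid posterior
        losses.append(-np.mean(y * np.log(p + 1e-12)
                               + (1 - y) * np.log(1 - p + 1e-12)))
        # backpropagate the log loss (full-batch gradient descent)
        dz2 = (p - y).reshape(-1, 1) / n
        dW2 = H.T @ dz2; db2 = dz2.sum(0)
        dH = (dz2 @ W2.T) * (1 - H ** 2)                  # tanh derivative
        dW1 = X.T @ dH; db1 = dH.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    def predict(xq):
        Hq = np.tanh(xq.reshape(-1, 1) @ W1 + b1)
        return 1 / (1 + np.exp(-(Hq @ W2 + b2).ravel()))
    return predict, losses

# Hypothetical 1-D labeled data: class 1 ~ N(0, 1), class 0 ~ N(3, 1).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 1000), rng.normal(3, 1, 1000)])
y = np.concatenate([np.ones(1000), np.zeros(1000)])
predict, losses = train_posterior_net(x, y)

# Restrict to the subsampling region A = (-inf, 2]; the paper would then
# derive the weights alpha(x) from this estimated posterior.
xa = x[x <= 2.0]
post = predict(xa)  # estimated P(Y = 1 | X = x) on A
```

The posterior estimate on A is the ingredient the quoted setup feeds into the choice of α(x); everything else here (data distribution, optimizer) is a stand-in.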