Meta-Learning PAC-Bayes Priors in Model Averaging

Authors: Yimin Huang, Weiran Huang, Liang Li, Zhenguo Li (pp. 4198-4205)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In practice, both methods perform well in simulations and real-data studies, especially with poor-quality data. Illustrative simulations including regression and classification tasks, given in Section 4, show that the algorithms lead to more effective prediction. The proposed methods are further applied to two real datasets, confirming the higher prediction accuracy of the minimizing-risk-bound method.
Researcher Affiliation | Industry | Yimin Huang, Weiran Huang, Liang Li, Zhenguo Li (Huawei Noah's Ark Lab); {yimin.huang, weiran.huang, liliang103, Li.Zhenguo}@huawei.com
Pseudocode | Yes | Algorithm 1: Historical Data Related Algorithm. Algorithm 2: Sequential Batch Sampling Algorithm.
Open Source Code | No | The paper does not provide an explicit statement about the release of its own source code, nor does it include a link to a code repository for the described methodology.
Open Datasets | Yes | The task environment is constructed from augmentations of the MNIST dataset (LeCun 1998). The BGS data, with small d, comes from the Berkeley Guidance Study (Tuddenham and Snyder 1954).
Dataset Splits | Yes | Each sample set S_i is divided into a training set S_i^train and a validation set S_i^validation. The meta-learner is trained on meta-training tasks, each with 50,000 training samples and 10,000 validation samples.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU/GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions the R package SOIL and Keras, but does not give version numbers for these or any other software dependencies.
Experiment Setup | Yes | All specific parameter settings are summarized in Table 1, and the confidence level δ in Lemma 1 is set to 0.01. The meta-learner is trained on meta-training tasks, each with 50,000 training samples and 10,000 validation samples. For a new task with fewer training samples and 10,000 test samples, 2,000 training samples are randomly drawn 20 times, and the average test error percentage of the different learning methods is compared.
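The evaluation protocol quoted in the Experiment Setup row (draw 2,000 training samples at random, repeat 20 times, report the average test error percentage) can be sketched as below. This is a minimal illustration, not the authors' released code: the `fit_fn` interface and all names are hypothetical placeholders for whatever learning method is being compared.

```python
import numpy as np

def average_test_error(train_X, train_y, test_X, test_y,
                       fit_fn, n_sub=2000, n_repeats=20, seed=0):
    """Repeatedly subsample the training pool, fit a model, and
    return the mean test error percentage over all repetitions.

    fit_fn(X, y) is assumed to return a callable that maps test
    inputs to predicted labels (a hypothetical interface)."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_repeats):
        # Draw n_sub training samples without replacement.
        idx = rng.choice(len(train_X), size=n_sub, replace=False)
        model = fit_fn(train_X[idx], train_y[idx])
        preds = model(test_X)
        errors.append(100.0 * np.mean(preds != test_y))
    return float(np.mean(errors))
```

With the paper's settings this would be called with `n_sub=2000`, `n_repeats=20`, and a 10,000-sample test set, once per learning method under comparison.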