The Human Kernel

Authors: Andrew Gordon Wilson, Christoph Dann, Christopher G. Lucas, Eric P. Xing

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We create new human function learning datasets, including novel function extrapolation problems and multiple-choice questions that explore human intuitions about simplicity and explanatory power, available at http://functionlearning.com/. We develop a statistical framework for kernel learning from the predictions of a model, conditioned on the (training) information that model is given. We exploit this framework to directly learn kernels from human responses, which contrasts with all prior work on human function learning, where one compares a fixed model to human responses. Further, we consider individual rather than averaged human extrapolations. We interpret the learned kernels to gain scientific insights into human inductive biases, including the ability to adapt to new information for function learning. We also use the learned human kernels to inspire new types of covariance functions which can enable extrapolation on problems which are difficult for conventional GP models. We study Occam's razor in human function learning, and compare to GP marginal likelihood based model selection, which we show is biased towards under-fitting. We provide an expressive quantitative means to compare existing machine learning algorithms with human learning, and a mechanism to directly infer human prior representations.
Researcher Affiliation | Academia | Andrew Gordon Wilson (CMU), Christoph Dann (CMU), Christopher G. Lucas (University of Edinburgh), Eric P. Xing (CMU)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states that the human function learning datasets are available at http://functionlearning.com/, but it provides no statement or link releasing source code for the methodology it describes.
Open Datasets | Yes | We create new human function learning datasets, including novel function extrapolation problems and multiple-choice questions that explore human intuitions about simplicity and explanatory power, available at http://functionlearning.com/.
Dataset Splits | No | The paper describes how training data are generated (e.g., "sample 20 datapoints y from a GP" and "two sets of 5 functions"), but it does not specify explicit training/validation/test splits (e.g., percentages or exact counts) of a larger dataset that would enable reproduction.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models or memory specifications.
Software Dependencies | No | The paper discusses various models and kernels (e.g., Gaussian processes, spectral mixture kernels, RBF kernel) but does not provide specific software names with version numbers that would be required to replicate the experiments.
Experiment Setup | Yes | We sample 20 datapoints y from a GP with RBF kernel (the supplement describes GPs), k_RBF(x, x′) = exp(−0.5‖x − x′‖² / ℓ²), at random input locations. Conditioned on these data, we then sample multiple posterior draws, y^(1), …, y^(W), each containing 20 datapoints, from a GP with a spectral mixture kernel [14] with two components (the prediction kernel). ... To reconstruct the prediction kernel, we learn the parameters θ of a randomly initialized spectral mixture kernel with five components, by optimizing the predictive conditional marginal likelihood ∏_{j=1}^{W} p(y^(j) | y, k_θ) with respect to θ. ... All human participants were recruited using Amazon's Mechanical Turk and saw experimental materials provided at http://functionlearning.com. When we are considering stationary ground truth kernels, we use a spectral mixture for kernel learning; otherwise, we use a non-parametric empirical estimate. A sketch of this pipeline appears below.
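The reconstruction procedure quoted above is concrete enough to sketch end to end. Below is a minimal numpy/scipy sketch of it: sample training data from a GP with an RBF kernel, draw posterior samples from a two-component spectral mixture "prediction" GP, then recover that kernel by fitting a randomly initialized five-component spectral mixture kernel to the draws via the predictive conditional marginal likelihood ∏_{j=1}^{W} p(y^(j) | y, k_θ). The input locations, hyperparameter values, jitter levels, number of draws W, and the L-BFGS-B optimizer are illustrative assumptions, not the paper's exact settings.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rbf_kernel(x1, x2, ell=0.3):
    # k_RBF(x, x') = exp(-0.5 * ||x - x'||^2 / ell^2)
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * d ** 2 / ell ** 2)

def sm_kernel(x1, x2, w, mu, v):
    # 1-D spectral mixture kernel (Wilson & Adams, 2013):
    # k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi tau mu_q)
    tau = x1[:, None] - x2[None, :]
    k = np.zeros_like(tau)
    for w_q, mu_q, v_q in zip(w, mu, v):
        k += w_q * np.exp(-2 * np.pi ** 2 * tau ** 2 * v_q) * np.cos(2 * np.pi * tau * mu_q)
    return k

def gp_posterior(Kxx, Kxs, Kss, y, jitter=1e-6):
    # Posterior predictive mean/covariance of a zero-mean GP at the test inputs.
    L = np.linalg.cholesky(Kxx + jitter * np.eye(len(y)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    V = np.linalg.solve(L, Kxs)
    return Kxs.T @ alpha, Kss - V.T @ V

# 1) Training data: 20 points drawn from a GP with an RBF kernel.
x = np.sort(rng.uniform(0.0, 1.0, 20))
y = rng.multivariate_normal(np.zeros(20), rbf_kernel(x, x) + 1e-8 * np.eye(20))

# 2) W posterior draws from a 2-component spectral mixture "prediction" GP,
#    conditioned on (x, y); these stand in for the model's predictions.
xs = np.linspace(0.0, 1.0, 20)                    # assumed test locations
w0, mu0, v0 = [1.0, 0.5], [2.0, 7.0], [1.0, 5.0]  # assumed true parameters
m, C = gp_posterior(sm_kernel(x, x, w0, mu0, v0),
                    sm_kernel(x, xs, w0, mu0, v0),
                    sm_kernel(xs, xs, w0, mu0, v0), y)
draws = rng.multivariate_normal(m, C + 1e-8 * np.eye(len(xs)), size=10)

# 3) Fit a randomly initialized 5-component spectral mixture kernel by
#    minimizing the negative log of prod_{j=1}^{W} p(y^(j) | y, k_theta).
Q = 5

def neg_log_lik(log_theta):
    w, mu, v = np.exp(log_theta.reshape(3, Q))    # positivity via log-scale
    try:
        m, C = gp_posterior(sm_kernel(x, x, w, mu, v),
                            sm_kernel(x, xs, w, mu, v),
                            sm_kernel(xs, xs, w, mu, v), y)
        L = np.linalg.cholesky(C + 1e-6 * np.eye(len(xs)))
    except np.linalg.LinAlgError:
        return 1e10  # penalize parameters that break the Cholesky
    r = np.linalg.solve(L, (draws - m).T)         # whitened residuals
    return 0.5 * np.sum(r ** 2) + len(draws) * np.sum(np.log(np.diag(L)))

theta0 = np.log(rng.uniform(0.1, 5.0, size=3 * Q))  # random initialization
res = minimize(neg_log_lik, theta0, method="L-BFGS-B")
print("learned (w, mu, v):\n", np.exp(res.x.reshape(3, Q)))

In the paper's reconstruction experiment this procedure recovers the prediction kernel from its own posterior draws; replacing the draws y^(j) with individual human extrapolations collected on Mechanical Turk is what yields the learned human kernels.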