Human-in-the-loop Active Covariance Learning for Improving Prediction in Small Data Sets
Authors: Homayun Afrabandpey, Tomi Peltola, Samuel Kaski
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results demonstrate improvement in predictive performance on both simulated and real data, in high-dimensional linear regression tasks, where we learn the covariance structure with a Gaussian process, based on sequential elicitation. |
| Researcher Affiliation | Academia | Homayun Afrabandpey, Tomi Peltola and Samuel Kaski, Helsinki Institute for Information Technology HIIT, Department of Computer Science, Aalto University, {homayun.afrabandpey, tomi.peltola, samuel.kaski}@aalto.fi |
| Pseudocode | No | The paper describes algorithms conceptually but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Codes and data are available in https://github.com/homayunafra/Human-in-the-loop-Active-Covariance-Learning-for-Improving-Prediction-in-Small-Data-Sets |
| Open Datasets | Yes | For Amazon, we use the kitchen appliances subset which contains 5149 reviews [Blitzer et al., 2007]. |
| Dataset Splits | Yes | Among the 1000 reviews for the model, 10% were randomly selected for training and the remaining 90% for testing. (A minimal split sketch follows this table.) |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU models, CPU types, or memory) used for running the experiments. It only mentions 'implemented computation of the utilities in parallel' in Section 3.3. |
| Software Dependencies | No | The paper states that the model was implemented 'in the probabilistic programming language Stan [Carpenter et al., 2016]', but it does not specify a version number for Stan or any other software dependency. |
| Experiment Setup | Yes | The hyperparameters of the model are a_σ = 2 and b_σ = 7 for σ², and a_τ = 2 and b_τ = 4 for τ²... The prior for γ is set to N⁺(1, 0.5)... The hyperparameters of the threshold variable are µ_ξ = 20 and σ²_ξ = 10. For real data, we set τ² = 0.01, obtained by cross-validation. (These values are gathered into a configuration sketch after the table.) |
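The "Dataset Splits" row above quotes a 10% / 90% random split of the 1000 selected Amazon reviews. The snippet below is a minimal sketch of such a split, assuming NumPy and a fixed seed for repeatability; the function name, seed, and indexing scheme are illustrative and are not taken from the authors' released code.

```python
import numpy as np

def split_reviews(n_reviews=1000, train_frac=0.10, seed=0):
    """Randomly split review indices into a small training set (10%)
    and a large test set (90%), mirroring the quoted protocol.
    Hypothetical helper: name, seed, and defaults are illustrative."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_reviews)      # random ordering of the 1000 reviews
    n_train = int(round(train_frac * n_reviews))
    return idx[:n_train], idx[n_train:]   # 100 training indices, 900 test indices

train_idx, test_idx = split_reviews()
print(len(train_idx), len(test_idx))      # 100 900
```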
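Similarly, the numeric values quoted in the "Experiment Setup" row can be gathered into a single configuration object for easier scanning. The layout and key names below are assumptions made for illustration; only the numbers come from the paper, and the distributional families behind a_σ, b_σ, a_τ, b_τ are not stated in the quoted text.

```python
# Hyperparameters quoted in the 'Experiment Setup' row. Key names and the
# dictionary layout are illustrative assumptions; only the numeric values
# are taken from the paper.
model_hyperparams = {
    "sigma2": {"a_sigma": 2, "b_sigma": 7},               # hyperparameters for the noise variance σ²
    "tau2": {"a_tau": 2, "b_tau": 4},                     # hyperparameters for τ² (simulated data)
    "tau2_real_data": 0.01,                               # fixed τ² for real data, chosen by cross-validation
    "gamma_prior": {"family": "N+", "params": (1, 0.5)},  # truncated-normal prior N⁺(1, 0.5) for γ
    "threshold_xi": {"mu_xi": 20, "sigma2_xi": 10},       # hyperparameters of the threshold variable ξ
}
```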