Source-Target Similarity Modelings for Multi-Source Transfer Gaussian Process Regression
Authors: Pengfei Wei, Ramon Sagarna, Yiping Ke, Yew-Soon Ong, Chi-Keong Goh
ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on one synthetic and two real-world datasets, with learning settings of up to 11 sources for the latter, demonstrate the effectiveness of our proposed TCMSStack. |
| Researcher Affiliation | Collaboration | (1) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (2) Rolls-Royce@Nanyang Technological University Corporate Lab; (3) Rolls-Royce Advanced Technology Centre, Singapore. |
| Pseudocode | No | The paper describes the methods mathematically and textually but does not include any labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not provide a statement about releasing source code for the methodology or any links to a code repository. |
| Open Datasets | Yes | Amazon reviews. We extract the raw data containing 15 product reviews from (McAuley et al., 2015); UJIIndoorLoc. The building location dataset covers three buildings of Universitat Jaume I with four floors each (Torres-Sospedra et al., 2014). |
| Dataset Splits | No | For each source domain we sample 500 points uniformly at random for training. Likewise, train and test data from each target domain are obtained by sampling 25 points and 1000 points, respectively. No explicit validation split is mentioned. (A sampling sketch illustrating these splits appears below the table.) |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The hyperparameters of each method are optimized using the conjugate gradient implementation from the gpml package (Rasmussen & Nickisch, 2010). A specific version number for 'gpml package' is not provided. |
| Experiment Setup | Yes | The hyperparameters of each method are optimized using the conjugate gradient implementation from the gpml package (Rasmussen & Nickisch, 2010). For each search, we allow a maximum of 200 evaluations. We set α = 0.01, which is the best approximation stated in (Yong, 2015). (A hyperparameter-optimization sketch appears below the table.) |
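
The Dataset Splits row describes only the sampling counts (500 source training points; 25 target training and 1000 target test points). The following is a minimal sketch of that sampling scheme; the arrays `X_src`, `y_src`, `X_tgt`, and `y_tgt` and the helper `sample_split` are hypothetical stand-ins, not artifacts from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_split(X, y, n_train, n_test=None):
    """Uniformly sample disjoint train (and optionally test) indices,
    mirroring the counts quoted above: 500 source training points,
    25 target training points, 1000 target test points."""
    idx = rng.permutation(len(X))
    train_idx = idx[:n_train]
    if n_test is None:
        return X[train_idx], y[train_idx]
    test_idx = idx[n_train:n_train + n_test]
    return (X[train_idx], y[train_idx]), (X[test_idx], y[test_idx])

# Hypothetical arrays standing in for one source domain and the target domain.
X_src, y_src = rng.normal(size=(5000, 10)), rng.normal(size=5000)
X_tgt, y_tgt = rng.normal(size=(2000, 10)), rng.normal(size=2000)

Xs_tr, ys_tr = sample_split(X_src, y_src, n_train=500)
(Xt_tr, yt_tr), (Xt_te, yt_te) = sample_split(X_tgt, y_tgt, n_train=25, n_test=1000)
```

Note that no validation split is produced, consistent with the paper's description.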
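The Experiment Setup row states that hyperparameters are tuned by maximizing the marginal likelihood with the conjugate gradient routine of the MATLAB gpml package, with at most 200 evaluations per search. The sketch below approximates that setup in Python with scikit-learn, whose `GaussianProcessRegressor` uses L-BFGS-B restarts rather than gpml's conjugate gradient, and it uses a plain squared-exponential kernel rather than the paper's TCMSStack similarity-aware kernel; all data here are toy placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(1)
# Toy stand-ins for the 25 target training and 1000 target test points.
X_tr, y_tr = rng.normal(size=(25, 10)), rng.normal(size=25)
X_te, y_te = rng.normal(size=(1000, 10)), rng.normal(size=1000)

# Squared-exponential kernel plus a noise term; the actual TCMSStack kernel
# additionally encodes source-target similarity, which is not reproduced here.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)

# scikit-learn maximizes the log marginal likelihood with L-BFGS-B; the
# restarts are only a rough counterpart to the paper's 200-evaluation budget.
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10, normalize_y=True)
gp.fit(X_tr, y_tr)

y_pred = gp.predict(X_te)
rmse = np.sqrt(np.mean((y_pred - y_te) ** 2))
print(f"Optimized kernel: {gp.kernel_}  |  test RMSE: {rmse:.3f}")
```

Reproducing the reported results would still require the gpml package and the paper's own kernel construction; this block only illustrates the general hyperparameter-optimization workflow described in the quoted setup.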