Graphical Multioutput Gaussian Process with Attention

Authors: Yijue Dai, Wenzhong Yan, Feng Yin

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical results confirm that the proposed GMOGP significantly outperforms state-of-the-art MOGP alternatives in predictive performance, as well as in time and memory efficiency, across various synthetic and real datasets.
Researcher Affiliation | Academia | Yijue Dai, Wenzhong Yan & Feng Yin, School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China. yinfeng@cuhk.edu.cn, yijuedai@link.cuhk.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/Blspdianna/GMOGP.
Open Datasets | No | The paper mentions several real-world datasets (JURA, ECG, EEG, SARCOS, KUKA, TRAFFIC) and a synthetic dataset, but it does not provide concrete access information such as links, DOIs, repository names, or formal citations establishing public availability. The synthetic data are generated by functions described in the paper itself.
Dataset Splits | No | The paper provides training and test split sizes (e.g., Ntrain and Ntest in Table 6, and 1200 training samples out of 1800 total for the synthetic data, with 600 test points), but it does not mention a separate validation set or validation split percentages.
Hardware Specification | No | The paper mentions using "two distributed computing units" for larger datasets but gives no details about the hardware (e.g., GPU models, CPU types, memory).
Software Dependencies | No | The paper does not provide version numbers for any software dependencies, libraries, or programming languages used (e.g., Python, PyTorch, CUDA).
Experiment Setup | Yes | Table 4 lists the detailed learning parameters for the competing models (noise, length-scale, output-scale, and mean parameters) (Appendix B.1). In the experiments, K = 3 or 4 layers of a composite Sinh-Arcsinh flow with an Affine flow (SAL) are stacked, which can be formulated as f^{(i)}_K(x) = c^{(i)} sinh(b^{(i)} arcsinh(f^{(i)}_{K-1}(x)) - a^{(i)}) + d^{(i)} (Appendix B.2).
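To make the experiment-setup row concrete, the stacked SAL (Sinh-Arcsinh plus Affine) flow described in Appendix B.2 can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names and the sign convention for a^{(i)} (the standard Jones-Pewsey sinh-arcsinh form) are assumptions; the released GMOGP code may organize the per-layer parameters differently.

```python
import numpy as np

def sal_layer(f, a, b, c, d):
    """One SAL layer: a sinh-arcsinh warp followed by an affine map.

    Computes c * sinh(b * arcsinh(f) - a) + d, matching the per-layer
    formula f_K = c * sinh(b * arcsinh(f_{K-1}) - a) + d (sign of `a`
    assumed; the paper's extracted equation omits it).
    """
    return c * np.sinh(b * np.arcsinh(f) - a) + d

def sal_flow(f, layer_params):
    """Stack K SAL layers (the paper uses K = 3 or 4).

    `layer_params` is a list of (a, b, c, d) tuples, one per layer;
    each output f_{k} feeds the next layer as its input f_{k-1}.
    """
    for a, b, c, d in layer_params:
        f = sal_layer(f, a, b, c, d)
    return f

# Example: three layers with (a=0, b=1, c=1, d=0) reduce each layer to
# sinh(arcsinh(f)) = f, so the flow is the identity map.
identity_params = [(0.0, 1.0, 1.0, 0.0)] * 3
out = sal_flow(np.array([0.5, -1.0]), identity_params)
```

With b, c > 0 each layer is strictly increasing, so the composed flow is invertible, which is what makes it usable as a warping of the GP outputs.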