Collective Online Learning of Gaussian Processes in Massive Multi-Agent Systems

Authors: Trong Nghia Hoang, Quang Minh Hoang, Kian Hsiang Low, Jonathan How
Pages: 7850-7857

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluations show that COOL-GP is highly effective in model fusion, resilient to information disparity between agents, robust to transmission loss, and can scale to thousands of agents. This section empirically evaluates the fusion performance of our COOL-GP framework, its resilience to information disparity between agents, and robustness to transmission loss on both synthetic and real-world experimental domains.
Researcher Affiliation | Collaboration | ¹MIT-IBM Watson AI Lab, ²Carnegie Mellon University, ³National University of Singapore, ⁴Massachusetts Institute of Technology
Pseudocode | No | The paper describes algorithms and methods using mathematical equations and prose (e.g., in Section 3.1, "Online Update of q(u_I)" and "Decentralized Message Passing for Multi-Agent Model Fusion") but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | The AIRLINE domain (Hensman, Fusi, and Lawrence 2013; Hoang, Hoang, and Low 2015) features an air transportation delay phenomenon that generates streaming data of size 600,000 comprising 30,000 batches of 20 observations each. The AIMPEAK domain (Hoang, Hoang, and Low 2016) features a traffic phenomenon which took place over an urban road network comprising 775 road segments.
Dataset Splits | No | The paper mentions "streaming data" and "separate test data" for its experiments but does not explicitly provide details about training, validation, and test dataset splits with percentages, absolute counts, or a cross-validation setup.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. It only discusses computational time in general terms.
Software Dependencies | No | The paper does not provide specific software dependencies or library names with version numbers that were used to implement and run the experiments.
Experiment Setup | Yes | In all experiments, each data batch arrives sequentially in a random order and is dispatched to a random agent. Fig. 1 reports the results of the COOL-GP framework in a collective online learning scenario where two agents fuse their online sparse GP models of two correlated, synthetic phenomena to improve their averaged performance on test instances from their input localities. Fig. 2 further reports the performance of COOL-GP in a real-world traffic monitoring application deployed on a large, decentralized network consisting of 100 agents. Both of these cases demonstrate the effectiveness of COOL-GP fusion on the averaged predictive accuracy vs. varying numbers of streamed data batches for different numbers |I| of inducing inputs and M of projection matrix samples (Section 3.1).
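The dispatch protocol quoted above (data batches arriving sequentially in a random order, each routed to a random agent) can be sketched as follows. This is a minimal illustration, not code from the paper: the function name `stream_batches`, the fixed `seed`, and the agent count are all assumptions chosen for the example; the batch sizing mirrors the AIRLINE setup (600,000 observations as 30,000 batches of 20).

```python
import random

def stream_batches(data, batch_size, num_agents, seed=0):
    """Sketch of the dispatch protocol: split data into sequential
    batches, randomize their arrival order, and route each batch
    to a uniformly random agent. (Illustrative only.)"""
    rng = random.Random(seed)
    # Split the stream into consecutive fixed-size batches.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    # Batches arrive in a random order...
    rng.shuffle(batches)
    # ...and each one is dispatched to a random agent.
    return [(rng.randrange(num_agents), batch) for batch in batches]

# AIRLINE-style setup: 600,000 observations -> 30,000 batches of 20,
# dispatched across a 100-agent network (as in the traffic experiment).
data = list(range(600000))
assignments = stream_batches(data, batch_size=20, num_agents=100)
```

Each element of `assignments` pairs an agent index with one batch; a COOL-GP agent would consume its batches via the online update of its sparse GP posterior.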