Socialized Learning: Making Each Other Better Through Multi-Agent Collaboration

Authors: Xinjie Yao, Yu Wang, Pengfei Zhu, Wanyu Lin, Jialu Li, Weihao Li, Qinghua Hu

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the effectiveness of multi-agent collaboration in an extensive empirical study.
Researcher Affiliation | Academia | 1 College of Intelligence and Computing, Tianjin University, Tianjin, China; 2 Engineering Research Center of City Intelligence and Digital Governance, Ministry of Education of the People's Republic of China, Tianjin, China; 3 Haihe Lab of ITAI, Tianjin, China; 4 Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China; 5 Department of Computer Science, Boston University, Boston, United States.
Pseudocode | Yes | Algorithm 1: Training for MASC.
Open Source Code | Yes | Our code will be publicly available at https://github.com/yxjdarren/SL.
Open Datasets | Yes | We evaluate the performance on CIFAR10 (Krizhevsky et al., 2009) and CIFAR100 (Krizhevsky et al., 2009).
Dataset Splits | Yes | We use two versions of dataset split. The first one involves dividing CIFAR10 evenly among 5 agents, with each agent having 2 classes... while CIFAR100 is split evenly among 4 agents, each having 25 classes... Similarly, the second version is CIFAR10-2-5 and CIFAR100-5-20.
Hardware Specification | Yes | Our agent is deployed in PyTorch (Paszke et al., 2019) with an NVIDIA RTX 3090 GPU and trained with a batch size of 128 for 500 epochs.
Software Dependencies | Yes | Our agent is deployed in PyTorch (Paszke et al., 2019).
Experiment Setup | Yes | Our agent is deployed in PyTorch (Paszke et al., 2019) with an NVIDIA RTX 3090 GPU and trained with a batch size of 128 for 500 epochs, and we use SGD with momentum for optimization. The learning rate is set to 0.005, the energy anchor is set to -20, λ1 is set to 1, and λ2 is set to 0.1.
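
The class-disjoint splits quoted in the Dataset Splits row (e.g. CIFAR10 divided among 5 agents with 2 classes each) can be reproduced along the following lines. This is a minimal sketch, not the authors' released code: the split_by_class helper is hypothetical, and the block-wise class-to-agent assignment is an assumption.

    # Sketch (not the authors' code): class-disjoint CIFAR10-5-2 split,
    # i.e. five agents with two classes each. The consecutive-block
    # class-to-agent assignment is an assumption.
    import torch
    from torch.utils.data import Subset
    from torchvision import datasets, transforms

    def split_by_class(dataset, num_agents, classes_per_agent):
        """Partition a labelled dataset into class-disjoint per-agent subsets."""
        targets = torch.as_tensor(dataset.targets)
        subsets = []
        for a in range(num_agents):
            agent_classes = torch.arange(a * classes_per_agent, (a + 1) * classes_per_agent)
            idx = torch.isin(targets, agent_classes).nonzero(as_tuple=True)[0]
            subsets.append(Subset(dataset, idx.tolist()))
        return subsets

    cifar10 = datasets.CIFAR10(root="./data", train=True, download=True,
                               transform=transforms.ToTensor())
    agents = split_by_class(cifar10, num_agents=5, classes_per_agent=2)  # CIFAR10-5-2
    # A CIFAR100-4-25 split would use num_agents=4, classes_per_agent=25.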
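
The hyperparameters quoted in the Experiment Setup row translate into roughly the per-agent loop below. The backbone network, the momentum value, and the exact auxiliary losses weighted by λ1 and λ2 (including the energy-anchor term at -20) are not specified in this section, so they appear only as labeled placeholders.

    # Sketch of the quoted optimization setup: SGD with momentum, lr 0.005,
    # batch size 128, 500 epochs. Backbone and momentum value are assumptions;
    # the auxiliary loss is a placeholder, not the paper's actual objective.
    import torch
    from torch import nn, optim
    from torch.utils.data import DataLoader

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))    # placeholder backbone
    optimizer = optim.SGD(model.parameters(), lr=0.005, momentum=0.9)  # momentum assumed to be 0.9
    loader = DataLoader(agents[0], batch_size=128, shuffle=True)       # one agent's split from the sketch above
    lambda1, lambda2 = 1.0, 0.1

    for epoch in range(500):
        for images, labels in loader:
            optimizer.zero_grad()
            task_loss = nn.functional.cross_entropy(model(images), labels)
            aux_loss = torch.zeros(())  # stand-in for the energy-anchor / collaboration terms
            (lambda1 * task_loss + lambda2 * aux_loss).backward()
            optimizer.step()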