Unlocking the Potential of Global Human Expertise

Authors: Elliot Meyerson, Olivier Francon, Darren Sargent, Babak Hodjat, Risto Miikkulainen

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | RHEA is first illustrated through a formal synthetic example below, demonstrating how this process can result in improved decision-making. RHEA is then put to work in a large-scale international experiment on developing non-pharmaceutical interventions for the COVID-19 pandemic. The results show that broader and better policy strategies can be discovered in this manner, beyond those that would be available through AI or human experts alone.
Researcher Affiliation | Collaboration | Elliot Meyerson (1), Olivier Francon (1), Darren Sargent (1), Babak Hodjat (1), Risto Miikkulainen (1, 2); (1) Cognizant AI Labs, (2) The University of Texas at Austin
Pseudocode | No | The paper describes procedural steps and includes flow diagrams but does not contain formal pseudocode blocks or algorithms labeled as such.
Open Source Code | Yes | Code for the illustrative domain was implemented outside of the proprietary framework and can be found at https://github.com/cognizant-ai-labs/rhea-demo.
Open Datasets | Yes | The data collected from the XPRIZE Pandemic Response Challenge (in the Define and Gather phases) and used to distill models that were then Evolved can be found on AWS S3 at https://s3.us-west-2.amazonaws.com/covid-xprize-anon (i.e., in the public S3 bucket named covid-xprize-anon, so it is also accessible via the AWS command line). (A hedged data-access sketch follows the table.)
Dataset Splits | Yes | ...resulting in 212,400 training samples for each prescriptor, a random 20% of which was used for validation for early stopping. (A split sketch follows the table.)
Hardware Specification | Yes | Each training run of RHEA for the Pandemic Response Challenge experiments takes 9 hours on a 16-core m5a.4xlarge EC2 instance.
Software Dependencies | Yes | Gaussian Kernel Density Estimation (KDE; Fig. 3d), using the scipy implementation with default parameters [75]. ... SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. (A KDE usage sketch follows the table.)
Experiment Setup | Yes | Distilled models were implemented in Keras [7] and trained with Adam [35] using L1 loss (since policy actions were on an ordinal scale). ... Evolution from the distilled models was run for 100 generations in 10 independent trials to produce the final RHEA models. ... The population size was 200; in RHEA, 169 of the 200 random NNs in the initial population were replaced with distilled models. (A training-setup sketch follows the table.)
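
The Open Datasets row points to a public S3 bucket. The following is a minimal illustrative sketch, assuming boto3 is installed and the bucket permits anonymous listing, of how such a public bucket can be read without AWS credentials; the object key in the commented download call is a placeholder, not a real file name from the bucket.

```python
# Minimal sketch: list and (optionally) download objects from the public S3 bucket
# named in the paper (covid-xprize-anon), using anonymous (unsigned) requests.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

BUCKET = "covid-xprize-anon"  # public bucket cited in the paper

s3 = boto3.client("s3", region_name="us-west-2",
                  config=Config(signature_version=UNSIGNED))

# List the first page of objects to see what data is available.
response = s3.list_objects_v2(Bucket=BUCKET)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download a single object to a local file (the key below is a placeholder only).
# s3.download_file(BUCKET, "some/key.csv", "local_copy.csv")
```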
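
The Dataset Splits row reports 212,400 training samples per prescriptor with a random 20% held out for validation. Below is a minimal sketch of such a split using NumPy with placeholder arrays; the feature and action dimensions are assumptions, not values from the paper.

```python
# Minimal sketch: random 80/20 train/validation split of the distillation data.
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 212_400   # training samples per prescriptor, as reported in the paper
N_FEATURES = 21       # placeholder input width (assumption)
N_ACTIONS = 12        # placeholder number of policy actions (assumption)

# Placeholder data standing in for (context, expert action) pairs.
X = rng.normal(size=(N_SAMPLES, N_FEATURES)).astype("float32")
y = rng.integers(0, 5, size=(N_SAMPLES, N_ACTIONS)).astype("float32")

# Randomly hold out 20% of the samples for validation (used for early stopping).
perm = rng.permutation(N_SAMPLES)
n_val = N_SAMPLES // 5
val_idx, train_idx = perm[:n_val], perm[n_val:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]

print(f"train: {len(train_idx)} samples, val: {len(val_idx)} samples")
```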
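
The Software Dependencies row cites SciPy's Gaussian KDE with default parameters. A minimal usage sketch on synthetic one-dimensional data follows; the data itself is made up, and only the API call mirrors what the paper describes.

```python
# Minimal sketch: Gaussian kernel density estimation with scipy defaults.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic 1-D sample standing in for the quantities analyzed in the paper's Fig. 3d.
samples = np.concatenate([rng.normal(-1.0, 0.3, 500), rng.normal(1.5, 0.5, 500)])

# Default parameters mean the bandwidth is chosen by Scott's rule.
kde = gaussian_kde(samples)

# Evaluate the estimated density on a grid of points.
grid = np.linspace(samples.min() - 1.0, samples.max() + 1.0, 200)
density = kde(grid)
print(density[:5])
```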
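
The Experiment Setup row describes distilled models implemented in Keras and trained with Adam under an L1 loss, with early stopping on the held-out 20%. The sketch below follows those settings, but the architecture, data shapes, and early-stopping patience are assumptions; the closing comment only gestures at how 169 of the 200 networks in the initial evolution population would be seeded from such distilled models.

```python
# Minimal sketch: Keras distillation training with Adam, L1 loss, and early stopping.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)

N_FEATURES = 21   # placeholder input width (assumption)
N_ACTIONS = 12    # placeholder number of ordinal policy actions (assumption)

# Small placeholder dataset; in the paper each prescriptor has 212,400 samples.
X = rng.normal(size=(10_000, N_FEATURES)).astype("float32")
y = rng.integers(0, 5, size=(10_000, N_ACTIONS)).astype("float32")

# Distilled prescriptor model (architecture here is illustrative, not from the paper).
model = keras.Sequential([
    keras.layers.Input(shape=(N_FEATURES,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(N_ACTIONS),
])

# Adam optimizer with L1 (mean absolute error) loss, since the actions are ordinal.
model.compile(optimizer=keras.optimizers.Adam(), loss="mean_absolute_error")

# Shuffle first, then let Keras hold out the last 20%, so the validation set is random.
perm = rng.permutation(len(X))
X, y = X[perm], y[perm]

model.fit(
    X, y,
    validation_split=0.2,
    callbacks=[keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)],  # patience is an assumption
    epochs=50,
    batch_size=256,
    verbose=0,
)

# In RHEA's Evolve phase, weights from such distilled models would seed 169 of the
# 200 individuals in the initial population; the remaining 31 stay randomly initialized.
```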