Intrinsically Motivated Discovery of Diverse Patterns in Self-Organizing Systems

Authors: Chris Reinke, Mayalen Etcheverry, Pierre-Yves Oudeyer

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we formulate the problem of automated discovery of diverse self-organized patterns in such high-dimensional complex dynamical systems, as well as a framework for experimentation and evaluation. Using a continuous GOL as a testbed, we show that recent intrinsically motivated machine learning algorithms (POP-IMGEPs), initially developed for learning of inverse models in robotics, can be transposed and used in this novel application area. These algorithms combine intrinsically motivated goal exploration and unsupervised learning of goal space representations. Goal space representations describe the interesting features of patterns for which diverse variations should be discovered. In particular, we compare various approaches to define and learn goal space representations from the perspective of discovering diverse spatially localized patterns. Moreover, we introduce an extension of a state-of-the-art POP-IMGEP algorithm which incrementally learns a goal representation using a deep auto-encoder, and the use of CPPN primitives for generating initialization parameters. We show that it is more efficient than several baselines and equally efficient as a system pre-trained on a hand-made database of patterns identified by human experts.
Researcher Affiliation | Academia | Chris Reinke, Mayalen Etcheverry, Pierre-Yves Oudeyer; Flowers Team, Inria, Univ. Bordeaux, Ensta ParisTech (France); {chris.reinke,mayalen.etcheverry,pierre-yves.oudeyer}@inria.fr
Pseudocode | Yes | Algorithm 1: IMGEP-OGL ... Algorithm 2: IMGEP-HGS, IMGEP-RGS and IMGEP-PGL (a minimal Python sketch of this goal-exploration loop follows the table).
Open Source Code | Yes | Source code and supplementary material at https://automated-discovery.github.io/
Open Datasets | No | The paper states that it uses Lenia (Chan, 2019) as a testbed and generates patterns from its own experiments (e.g., a "large dataset of 42500 Lenia patterns identified during the many experiments over all evaluated algorithms"). While a website is provided for the discovered patterns, it is not explicitly stated that the specific training datasets (e.g., the 558 patterns or the 42500 patterns used to train the VAEs) are publicly available with concrete access information. The reference to Chan (2019) describes the Lenia system itself, not a pre-existing public dataset used for training in this paper.
Dataset Splits | Yes | The training set consisted of 558 Lenia patterns: half were animals that had been manually identified by Chan (2019); the other half were randomly generated with CPPNs (see Section 4.4). ... The β-VAE was trained on a large dataset of 42500 patterns (37500 as training set, 5000 as validation set) from the experiments of all algorithms and each of their 10 repetitions (see the data-split sketch after the table).
Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU models, memory) are provided for running the experiments.
Software Dependencies | No | The paper mentions using the "neat-python package", "pytorch default initialization", and the "Adam optimizer (Kingma & Ba, 2014)". While the software is named, specific version numbers are not provided (e.g., the PyTorch version or the exact neat-python version).
Experiment Setup | Yes | For each algorithm, 10 independent repetitions of the exploration experiment were conducted. Each experiment consisted of N = 5000 exploration iterations. ... For IMGEP variants, the first N_init = 1000 iterations used random parameter sampling to initialize their histories H. ... Every K = 100 explorations the β-VAE model was trained for 40 epochs, resulting in 2000 epochs in total... All β-VAEs were trained for 2000 epochs and initialized with pytorch default initialization. We used the Adam optimizer (Kingma & Ba, 2014) (lr = 1e-3, β1 = 0.9, β2 = 0.999, ϵ = 1e-8, weight decay = 1e-5) with a batch size of 64 (these settings are mirrored in the optimizer sketch after the table).
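
To make the quoted abstract and pseudocode rows concrete, here is a minimal Python sketch of the IMGEP-OGL exploration loop as the paper describes it: random bootstrapping for the first N_init iterations, goal sampling in a learned latent goal space, nearest-neighbour parameter selection with mutation, and periodic retraining of the goal-space auto-encoder. All helper names below (sample_random_params, run_system, mutate, encode, train_goal_space) are illustrative stubs, not the authors' actual implementation, and the exact retraining schedule is an assumption.

```python
import torch

# Minimal sketch of the IMGEP-OGL loop (Algorithm 1), using stub helpers.
# Everything marked "stub" stands in for the real Lenia/CPPN machinery;
# none of these names come from the authors' code.

LATENT_DIM = 8          # size of the learned goal space (an assumption)
N_EXPLORATIONS = 5000   # paper: N = 5000 exploration iterations
N_INIT = 1000           # paper: first N_init = 1000 iterations sample randomly
K_TRAIN = 100           # paper: retrain the goal-space VAE every K = 100 steps
EPOCHS = 40             # paper: 40 epochs per retraining (2000 in total)

def sample_random_params():            # stub: random CPPN/Lenia parameters
    return torch.randn(16)

def run_system(params):                # stub: roll out Lenia, return a pattern
    return torch.tanh(params.repeat(16))

def mutate(params):                    # stub: perturb parameters
    return params + 0.1 * torch.randn_like(params)

def encode(observation):               # stub: beta-VAE encoder -> goal space
    return observation[:LATENT_DIM]

def train_goal_space(observations, epochs):  # stub: online VAE update
    pass

history = []  # exploration history H: (parameters, observation, reached goal)

for i in range(N_EXPLORATIONS):
    if i < N_INIT:
        params = sample_random_params()         # bootstrapping phase
    else:
        goal = torch.randn(LATENT_DIM)          # sample a target goal
        # Select the past parameters whose reached goal is closest to it,
        # then mutate them to produce the next candidate.
        closest = min(history, key=lambda h: torch.dist(h[2], goal).item())
        params = mutate(closest[0])

    observation = run_system(params)
    history.append((params, observation, encode(observation)))

    if (i + 1) % K_TRAIN == 0:                  # schedule detail is assumed
        train_goal_space([h[1] for h in history], EPOCHS)
```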
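The 37500/5000 split reported in the Dataset Splits row maps directly onto a standard PyTorch split. The pattern tensor below is a placeholder (the paper's dataset files are not published under a stated access path), and the 32x32 grid size is shrunk for the sketch rather than taken from the paper.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

# Placeholder standing in for the 42,500 Lenia patterns; real data would
# come from the authors' exploration experiments.
patterns = torch.rand(42500, 1, 32, 32)
dataset = TensorDataset(patterns)

# Paper: 37,500 patterns as training set, 5,000 as validation set.
train_set, val_set = random_split(
    dataset, [37500, 5000], generator=torch.Generator().manual_seed(0))

# Paper: batch size of 64 for beta-VAE training.
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64)
```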
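The optimizer settings quoted in the Experiment Setup row likewise translate one-to-one into PyTorch. The model and loss below are purely illustrative stand-ins (the paper trains a β-VAE with its ELBO objective); only the optimizer hyperparameters and epoch count come from the paper. The loop reuses train_loader from the split sketch above.

```python
import torch
import torch.nn as nn

# Stand-in for the authors' beta-VAE; PyTorch's default initialization is
# applied automatically to these layers, as the paper states.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 8))

# Paper: Adam with lr = 1e-3, beta1 = 0.9, beta2 = 0.999, eps = 1e-8,
# and weight decay 1e-5.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), eps=1e-8, weight_decay=1e-5)

# Paper: 40 epochs per retraining round (2,000 epochs over the whole run).
# The loss here is a placeholder, not the beta-VAE objective.
for epoch in range(40):
    for (batch,) in train_loader:
        optimizer.zero_grad()
        loss = model(batch).pow(2).mean()
        loss.backward()
        optimizer.step()
```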