Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

CAX: Cellular Automata Accelerated in JAX

Authors: Maxence Faldor, Antoine Cully

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type | Experimental | "We demonstrate CAX's performance and flexibility through a wide range of benchmarks and applications. From classic models like elementary cellular automata and Conway's Game of Life to advanced applications such as growing neural cellular automata and self-classifying MNIST digits, CAX speeds up simulations by up to 2,000 times. Furthermore, we demonstrate CAX's potential to accelerate research by presenting a collection of three novel cellular automata experiments."
Researcher Affiliation | Academia | "Maxence Faldor, Department of Computing, Imperial College London, London, United Kingdom, EMAIL; Antoine Cully, Department of Computing, Imperial College London, London, United Kingdom, EMAIL"
Pseudocode | Yes |

    @nnx.jit
    def step(self, state: State, input: Input | None = None) -> State:
        """Perform a single step of the CA.

        Args:
            state: Current state.
            input: Optional input.

        Returns:
            Updated state.
        """
        perception = self.perceive(state)
        state = self.update(state, perception, input)
        return state
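The quoted step method follows a perceive/update split. As a minimal, self-contained sketch of that pattern (plain JAX, not CAX's actual implementation), here is the same structure written out for Conway's Game of Life, one of the paper's benchmark models:

```python
import jax
import jax.numpy as jnp
from jax.scipy.signal import convolve2d

# Moore-neighborhood kernel: counts each cell's 8 neighbors.
KERNEL = jnp.ones((3, 3)).at[1, 1].set(0.0)

def perceive(state):
    # Toroidal (wrap-around) boundary, then a valid convolution.
    padded = jnp.pad(state, 1, mode="wrap")
    return convolve2d(padded, KERNEL, mode="valid")

def update(state, neighbors):
    # Game of Life rule: a dead cell with 3 neighbors is born;
    # a live cell with 2 or 3 neighbors survives.
    born = (state == 0) & (neighbors == 3)
    survive = (state == 1) & ((neighbors == 2) | (neighbors == 3))
    return (born | survive).astype(state.dtype)

@jax.jit
def step(state):
    return update(state, perceive(state))

# A "blinker" oscillates between a vertical and a horizontal bar.
blinker = jnp.zeros((5, 5)).at[1:4, 2].set(1.0)
next_state = step(blinker)
```

The perceive/update separation mirrors the quoted pseudocode: perception gathers neighborhood information, and the update rule maps it to the next state.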
Open Source Code | Yes | "In response to these challenges and opportunities, we present CAX: Cellular Automata Accelerated in JAX, an open-source library with cutting-edge performance, designed to provide a flexible and efficient framework for cellular automata research. CAX is built on JAX (Bradbury et al., 2018), a high-performance numerical computing library, enabling cellular automata simulations to be sped up through massive parallelization across hardware accelerators such as CPUs, GPUs, and TPUs."
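The parallelization the quote refers to can be illustrated with JAX's standard transformations: jax.vmap batches a single-grid step over many independent grids, and jax.jit compiles the batched function for the available accelerator. The sketch below is illustrative only (an elementary CA, rule 90), not CAX's API:

```python
import jax
import jax.numpy as jnp

def step(state):
    # Elementary CA rule 90: next cell = left neighbor XOR right
    # neighbor, with wrap-around boundaries via jnp.roll.
    left = jnp.roll(state, 1)
    right = jnp.roll(state, -1)
    return jnp.bitwise_xor(left, right)

# vmap maps the single-grid step over a whole batch of grids;
# jit compiles the batched function once for CPU/GPU/TPU.
batched_step = jax.jit(jax.vmap(step))

# 1024 independent 256-cell grids, each seeded at the center.
states = jnp.zeros((1024, 256), dtype=jnp.uint8).at[:, 128].set(1)
states = batched_step(states)
```

Because vmap vectorizes rather than loops, all 1024 grids advance in a single fused kernel launch, which is the mechanism behind the hardware-acceleration claim.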
Open Datasets | Yes | "Self-classifying MNIST Digits (Randazzo et al., 2020), Neural, 2D." and "In this experiment, we train a one-dimensional NCA on the 1D-ARC dataset (Xu et al., 2024). The 1D-ARC dataset is a novel adaptation of the original Abstraction and Reasoning Corpus (ARC) (Chollet, 2019), designed to simplify and streamline research in artificial intelligence and language models."
Dataset Splits | Yes | "The primary goal of this experiment is for the NCA to learn a generalizable rule from the training set, enabling it to solve unseen examples in the test sets. This challenge tests the NCA's ability to infer abstract patterns and apply them to new situations, a key aspect of human-like reasoning. To evaluate the NCA's performance, we compare it to GPT-4, a state-of-the-art language model, on the 1D-ARC test set."
Hardware Specification | Yes | "Our benchmarks, conducted on a single NVIDIA RTX A6000 GPU, demonstrate significant performance gains across various cellular automata models."
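Benchmarking JAX code such as CAX requires care, because JAX dispatches asynchronously and the first call pays one-off JIT compilation cost. The following is a generic timing pattern (a toy step function standing in for a CA update), not the paper's benchmark harness:

```python
import time
import jax
import jax.numpy as jnp

@jax.jit
def step(state):
    # Toy update; stands in for one cellular automaton step.
    return jnp.roll(state, 1, axis=-1)

state = jnp.zeros((1024, 1024))

# Warm up: the first call triggers JIT compilation, which should
# not be counted in the measured time.
step(state).block_until_ready()

start = time.perf_counter()
out = state
for _ in range(100):
    out = step(out)
out.block_until_ready()  # wait for asynchronous dispatch to finish
elapsed = time.perf_counter() - start
```

Without the final block_until_ready(), the loop would only measure dispatch time, not actual device execution.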
Software Dependencies | No | The paper mentions software libraries such as JAX (Bradbury et al., 2018) and Flax (Heek et al., 2024) but does not provide version numbers for these or any other key software components used in the experiments. For example: "The recent surge in artificial intelligence has increased the availability of computational resources, and encouraged the development of sophisticated tools such as JAX (Bradbury et al., 2018), a high-performance numerical computing library with automatic differentiation and JIT compilation. A rich ecosystem of specialized libraries has emerged around JAX, such as Flax (Heek et al., 2024) for neural networks."
Experiment Setup | Yes | "A HYPERPARAMETERS. This appendix provides detailed hyperparameters for the three novel neural cellular automata (NCA) experiments introduced in Section 5. These hyperparameters govern the architecture, training process, and simulation dynamics of each experiment. For further details on the experimental setup, refer to the respective subsections in Section 5 or to the notebooks at https://github.com/maxencefaldor/cax."

Table 3: Hyperparameters for Diffusing Neural Cellular Automata (see Section 5.1)

Parameter | Value
Spatial dimensions | (72, 72)
Channel size | 64
Number of kernels | 3
Hidden size | 256
Cell dropout rate | 0.5
Batch size | 8
Number of steps | 64
Learning rate | 0.001
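For readers reproducing the setup, the Table 3 values can be collected into a single configuration object. The field names below are illustrative (CAX's actual configuration objects may differ); the values are those reported in Table 3:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiffusingNCAConfig:
    """Hyperparameters from Table 3 (Diffusing NCA, Section 5.1).

    Field names are hypothetical; only the values come from the paper.
    """
    spatial_dims: tuple = (72, 72)
    channel_size: int = 64
    num_kernels: int = 3
    hidden_size: int = 256
    cell_dropout_rate: float = 0.5
    batch_size: int = 8
    num_steps: int = 64
    learning_rate: float = 1e-3

config = DiffusingNCAConfig()
```

A frozen dataclass keeps the reported hyperparameters immutable and easy to log alongside experiment results.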