Reinforcement Symbolic Regression Machine
Authors: Yilong Xu, Yang Liu, Hao Sun
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test the performance of our method on multiple different datasets and compare it with the following baseline models in symbolic learning: |
| Researcher Affiliation | Academia | Yilong Xu1, Yang Liu2, Hao Sun1, 1Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; 2School of Engineering Science, University of Chinese Academy of Sciences, Beijing, China; |
| Pseudocode | Yes | Algorithm 1 Expression generation by RSRM |
| Open Source Code | Yes | Code and models of the Reinforcement Symbolic Regression Machine (RSRM) are available at https://github.com/intell-sci-comput/RSRM. |
| Open Datasets | Yes | To evaluate the efficiency of our model, we first utilize four basic benchmark datasets (see Appendix C for details): Nguyen (Uy et al., 2011), Nguyen-c (McDermott et al., 2012), R (Mundhenk et al., 2021b), and Livermore (Mundhenk et al., 2021b), as well as AIFeynman (Udrescu & Tegmark, 2020). |
| Dataset Splits | Yes | Each dataset is divided into three subsets: a training set, a test set, and a validation set. The training set comprises 30 to 80 points, the test set 10 to 25 points, and the validation set covers a broader range, spanning from 0 to 100. |
| Hardware Specification | No | No specific hardware details (such as GPU/CPU models, processors, or memory) used for running the experiments were provided. |
| Software Dependencies | No | We treat each placeholder as an unknown variable, which is optimized to maximize the reward. The BFGS algorithm (Fletcher, 2013), available in the scipy (Virtanen et al., 2020) module in Python, is used for optimization. No specific version numbers for Python, SciPy, or DEAP are provided. |
| Experiment Setup | Yes | The full set of hyperparameters can be seen in Table S1. |
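The constant-fitting step described in the Software Dependencies row can be illustrated with a short sketch. This is not the authors' code: the candidate expression, the mean-squared-error objective, and the synthetic data are hypothetical, but the mechanism is the one the paper names, i.e. treating each placeholder constant as an unknown variable and optimizing it with SciPy's BFGS.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical candidate expression with two placeholder constants:
#   f(x) = c1 * sin(x) + c2
# Synthetic data generated from ground-truth constants c1=2.5, c2=0.7.
x = np.linspace(-3.0, 3.0, 50)
y = 2.5 * np.sin(x) + 0.7

def objective(c):
    """Mean squared error of the candidate expression against the data."""
    pred = c[0] * np.sin(x) + c[1]
    return np.mean((pred - y) ** 2)

# Optimize the placeholder constants with BFGS, as the paper describes.
result = minimize(objective, x0=np.ones(2), method="BFGS")
c1, c2 = result.x
```

In a symbolic-regression loop, the resulting minimum error would then feed into the reward for the candidate expression; the specific reward shaping is defined in the paper, not shown here.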