Deep Generative Symbolic Regression with Monte-Carlo-Tree-Search
Authors: Pierre-Alexandre Kamienny, Guillaume Lample, Sylvain Lamprier, Marco Virgolin
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present the results of DGSR-MCTS. We begin by studying the performance on synthetic test datasets. Then, we present results on the SRBench datasets. |
| Researcher Affiliation | Collaboration | 1Meta AI, Paris, France; 2ISIR MLIA, Sorbonne Université, France; 3LERIA, Université d'Angers, France; 4Centrum Wiskunde & Informatica, the Netherlands. |
| Pseudocode | No | The paper describes its MCTS process in detail but does not provide structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for its methodology is openly available. |
| Open Datasets | Yes | We evaluate DGSR-MCTS on the regression datasets of the SRBench benchmark (La Cava et al., 2021) |
| Dataset Splits | Yes | Each dataset is split into 75% training data and 25% test data using sampling with a random seed (we use 3 seeds per dataset, giving a total of 528 datasets). A hedged illustration of this split protocol appears after the table. |
| Hardware Specification | No | The paper mentions "using 4 trainers (1 GPU/CPU each), 4 MCTS workers (1 GPU/CPU each)" but does not specify the exact models of GPUs or CPUs used. |
| Software Dependencies | No | The paper mentions the use of SymPy and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The specific decoding parameters and the distributions utilized are as follows: number of samples K per expansion (uniform on range [8, 16]); temperature used for decoding (uniform on range [0.5, 1.0]); length penalty used for decoding (uniform on range [0, 1.2]); depth penalty, an exponential value decay during the backup phase, decaying with depth to favor breadth or depth (uniform on discrete values [0.8, 0.9, 0.95, 1]); exploration constant puct: 1. A hedged sampling sketch is given after the table. |
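
The 75/25 split protocol with three random seeds (Dataset Splits row) can be illustrated with a minimal sketch. It assumes scikit-learn's `train_test_split`; the helper name `make_splits`, the seed values, and the synthetic stand-in data are illustrative and not taken from the paper.

```python
# Minimal sketch of the 75% / 25% split protocol with 3 random seeds per dataset.
# Assumes scikit-learn; neither the library nor the helper name comes from the paper.
import numpy as np
from sklearn.model_selection import train_test_split

def make_splits(X, y, seeds=(0, 1, 2), test_size=0.25):
    """Return one (train, test) split per random seed (3 seeds per dataset)."""
    splits = []
    for seed in seeds:
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, random_state=seed
        )
        splits.append(((X_tr, y_tr), (X_te, y_te)))
    return splits

# Synthetic stand-in data for one regression dataset (illustrative only).
X = np.random.randn(200, 3)
y = X[:, 0] ** 2 + np.sin(X[:, 1])
print(len(make_splits(X, y)), "splits of", int(0.75 * len(X)), "training rows each")
```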
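
The decoding and search hyper-parameter distributions quoted in the Experiment Setup row can be made concrete with a small sampling sketch. Only the ranges and discrete values come from the quoted excerpt; the function `sample_search_config` and the dictionary keys are assumed names for illustration, not identifiers from the authors' code.

```python
# Hedged sketch: draw one decoding/search configuration from the distributions
# quoted in the Experiment Setup row. Key names are illustrative, not the authors'.
import random

def sample_search_config(rng=None):
    rng = rng or random.Random()
    return {
        "num_samples_per_expansion": rng.randint(8, 16),      # K, uniform on [8, 16]
        "decoding_temperature": rng.uniform(0.5, 1.0),        # uniform on [0.5, 1.0]
        "length_penalty": rng.uniform(0.0, 1.2),              # uniform on [0, 1.2]
        "depth_penalty": rng.choice([0.8, 0.9, 0.95, 1.0]),   # uniform on discrete values
        "exploration_constant_puct": 1.0,                     # value 1 as in the excerpt
    }

if __name__ == "__main__":
    print(sample_search_config(random.Random(0)))
```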