Symbolic Regression with a Learned Concept Library
Authors: Arya Grayeli, Atharva Sehgal, Omar Costilla Reyes, Miles Cranmer, Swarat Chaudhuri
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally compare LASR on Feynman Equations... We validate LASR on the Feynman equations... On these benchmarks, LASR substantially outperforms a variety of state-of-the-art SR approaches based on deep learning and evolutionary algorithms. |
| Researcher Affiliation | Collaboration | Arya Grayeli (UT Austin, Foundry Technologies); Atharva Sehgal (UT Austin); Omar Costilla-Reyes (MIT); Miles Cranmer (University of Cambridge); Swarat Chaudhuri (UT Austin) |
| Pseudocode | Yes | The full LASR algorithm is presented in Algorithm 1 and visualized in Figure 2. |
| Open Source Code | Yes | Artifacts available at https://trishullab.github.io/lasr-web (Footnote 2) and 'We provide a link to an open source implementation of LASR in Julia.' (Checklist item 4). |
| Open Datasets | Yes | We evaluate LASR on the Feynman Equations dataset... and Big Bench dataset [16]. |
| Dataset Splits | Yes | We fit the free parameters of each equation on the training set (43,049 samples) and measure the MSE loss between the actual grade and the predicted grade on the validation set (10,763 samples). (A sketch of this fit-then-validate protocol appears below the table.) |
| Hardware Specification | Yes | We run all experiments on a server node with 8x A100 GPUs with 80 GB of VRAM each. |
| Software Dependencies | Yes | We instantiate LASR using gpt-3.5-turbo-0125 [4] as the backbone LLM... and llama3-8b [17]... We chose to run llama3-8b using vLLM [25]. (A minimal vLLM usage sketch appears below the table.) |
| Experiment Setup | Yes | We instantiate LASR using gpt-3.5-turbo-0125 [4] as the backbone LLM and calling it with p = 0.01 for 40 iterations. (Section 4.1 Setup)... Figure 7 showcases the hyperparameters used for all our experiments. (A sketch of the p-gated LLM call appears below the table.) |
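
The dataset-splits row quotes a fit-then-validate protocol: free parameters of a candidate equation are fit on the training split, and MSE is measured on a held-out validation split. Below is a minimal Python sketch of that protocol, not LASR's actual code (the open-source implementation is in Julia): `candidate_eq`, the synthetic data, and the ground-truth parameters are illustrative assumptions; only the split sizes come from the quote.

```python
# Sketch of the fit-then-validate protocol quoted in the Dataset Splits row.
# candidate_eq and the synthetic data are hypothetical; only the split
# sizes (43,049 train / 10,763 validation) come from the paper's quote.
import numpy as np
from scipy.optimize import curve_fit

def candidate_eq(x, a, b):
    # Hypothetical candidate expression with free parameters a, b.
    return a * np.exp(b * x)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=53_812)  # 43,049 + 10,763 samples
y = 2.0 * np.exp(0.5 * x) + rng.normal(scale=0.01, size=x.size)

x_train, y_train = x[:43_049], y[:43_049]
x_val, y_val = x[43_049:], y[43_049:]

# Fit the free parameters on the training split only.
theta, _ = curve_fit(candidate_eq, x_train, y_train, p0=[1.0, 1.0])

# Score with MSE on the held-out validation split.
mse = np.mean((candidate_eq(x_val, *theta) - y_val) ** 2)
print(f"validation MSE: {mse:.6f}")
```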
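The software-dependencies row mentions running llama3-8b through vLLM [25]. The following is a minimal sketch of offline inference with the `vllm` Python package, assuming a local GPU; the model ID, prompt, and sampling settings are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal offline-inference sketch with vLLM. The model ID and sampling
# settings are assumptions, not the paper's exact configuration.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)

prompts = ["Propose a symbolic expression relating force, mass, and acceleration."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```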
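The experiment-setup row quotes calling the LLM with p = 0.01 over 40 iterations; in LASR, p is the probability that an evolution step uses an LLM-guided operation rather than a standard symbolic one. Here is a hedged sketch of that gating logic, assuming hypothetical `llm_mutate` and `symbolic_mutate` operators; this is not LASR's Julia API, only an illustration of the p-gated choice.

```python
# Hedged sketch of the p-gated LLM call quoted in the setup row: with
# probability p an evolution step is delegated to the LLM, otherwise the
# standard symbolic operator runs. Operator names are hypothetical.
import random

P_LLM = 0.01   # probability of an LLM-guided operation (quoted)
N_ITERS = 40   # number of iterations (quoted)

def evolve(population, llm_mutate, symbolic_mutate, p=P_LLM, iters=N_ITERS):
    for _ in range(iters):
        op = llm_mutate if random.random() < p else symbolic_mutate
        population = [op(expr) for expr in population]
    return population
```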