Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
MetaSymNet: A Tree-like Symbol Network with Adaptive Architecture and Activation Functions
Authors: Yanjie Li, Weijun Li, Lina Yu, Min Wu, Jingyi Liu, Shu Wei, Yusong Deng, Meilan Hao
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the performance of MetaSymNet and five baseline algorithms, we conducted experiments across more than ten datasets, including SRBench. The experimental results show that MetaSymNet has achieved relatively excellent results on various evaluation metrics. |
| Researcher Affiliation | Academia | 1 AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; 2 School of Electronic, Electrical, and Communication Engineering, University of Chinese Academy of Sciences, Beijing, China; 3 Zhongguancun Academy, Beijing, China; 4 School of Integrated Circuits, University of Chinese Academy of Sciences, Beijing, China. EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | MetaSymNet's algorithm schematic is shown in Figure 1. See Appendix 1 for the pseudocode. |
| Open Source Code | Yes | Code: https://github.com/1716757342/MetaSymNet |
| Open Datasets | Yes | To evaluate the performance of MetaSymNet and five baseline algorithms, we conducted experiments across more than ten datasets, including SRBench. |
| Dataset Splits | No | The paper refers to using "ten public datasets" and "SRBench" for evaluation, and describes sampling data points "on the same interval for different algorithms" and running trials "20 times at varying noise levels." However, it does not provide specific train/validation/test splits (e.g., percentages, sample counts, or references to predefined splits) for these datasets as required by the question. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It focuses on the algorithmic aspects and results. |
| Software Dependencies | No | The paper mentions numerical optimization algorithms like SGD, BFGS, and L-BFGS, along with their respective citations, but it does not list any specific software libraries or packages with version numbers (e.g., Python 3.x, PyTorch 1.x) that were used for the implementation of MetaSymNet. |
| Experiment Setup | Yes | The maximum length is set to 80 for all algorithms. We incorporate an entropy loss metric for each set of selected parameters into the loss function; this integration aims to augment MetaSymNet's training efficiency and precision, where λ is the entropy loss regulation coefficient. If the target expression is still not found, or R² is less than 0.9999, this process is repeated until a stopping condition is reached. |
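The Experiment Setup row quotes an entropy term added to the loss (weighted by the regulation coefficient λ) and a stopping rule based on R² reaching 0.9999. The paper excerpt gives no code for either, so the following NumPy sketch illustrates one plausible reading; all function names here are illustrative, not from the paper.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination R^2."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def entropy_regularized_loss(y_true, y_pred, selection_probs, lam=0.01):
    """MSE plus an entropy penalty over the symbol-selection distributions.

    `selection_probs` is a list of probability vectors, one per selection;
    `lam` plays the role of the entropy loss regulation coefficient λ.
    """
    mse = np.mean((y_true - y_pred) ** 2)
    entropy = sum(-np.sum(p * np.log(p + 1e-12)) for p in selection_probs)
    return mse + lam * entropy

def should_stop(y_true, y_pred, threshold=0.9999):
    """Stopping rule quoted in the table: stop once R^2 >= 0.9999."""
    return r2_score(y_true, y_pred) >= threshold
```

Whether the entropy term is added or subtracted (i.e., whether low-entropy, confident selections are rewarded or penalized) is not specified in the excerpt; the sign here is an assumption.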
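The Software Dependencies row notes that constants are refined with numerical optimizers such as SGD, BFGS, and L-BFGS. As a minimal sketch of what such a constant-refinement step could look like (the skeleton function and helper names are hypothetical, not taken from the paper's code):

```python
import numpy as np
from scipy.optimize import minimize

def fit_constants(x, y, skeleton, n_consts, x0=None):
    """Refine the numeric constants of a fixed symbolic skeleton with L-BFGS-B.

    `skeleton(x, c)` evaluates the candidate expression with constant
    vector `c`; the MSE against `y` is minimized over `c`.
    """
    if x0 is None:
        x0 = np.ones(n_consts)  # generic starting point
    def mse(c):
        return np.mean((y - skeleton(x, c)) ** 2)
    res = minimize(mse, x0, method="L-BFGS-B")
    return res.x, res.fun

# Example: recover the constants of c0 * sin(x) + c1 from noiseless data.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = 2.5 * np.sin(x) + 1.0
consts, err = fit_constants(x, y, lambda x, c: c[0] * np.sin(x) + c[1], 2)
```

Since the example skeleton is linear in its constants, the MSE is convex in `c` and L-BFGS-B recovers the true values from a generic start; for general skeletons the result depends on initialization.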