Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Discovering Nonlinear PDEs from Scarce Data with Physics-encoded Learning
Authors: Chengping Rao, Pu Ren, Yang Liu, Hao Sun
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our method on three nonlinear PDE systems. The effectiveness and superiority of the proposed method over baseline models are demonstrated. |
| Researcher Affiliation | Academia | Northeastern University (EMAIL); Hao Sun, Renmin University of China (EMAIL) |
| Pseudocode | No | The paper describes its methodology using textual descriptions and figures, but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | The dataset and training script for each case considered in this paper can be found in https://github.com/Raocp/Discover-PDE-with-Noisy-Scarce-Data. |
| Open Datasets | Yes | The dataset and training script for each case considered in this paper can be found in https://github.com/Raocp/Discover-PDE-with-Noisy-Scarce-Data. |
| Dataset Splits | Yes | Among the entire measurement, 10% of the data is split off as the validation dataset. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. It only mentions 'numerical simulations' and 'computer's memory limit' without specifying any hardware. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and 'Runge-Kutta scheme' but does not provide specific version numbers for any software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | The learning rate is initialized to 0.002 and decays to 97% of its previous value every 200 iterations. Table C.1 ('Range of hyperparameters for the data reconstruction network') lists specific values for kernel size, number of layers, number of channels, learning rate, and regularizer weight. |
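The learning-rate schedule quoted in the Experiment Setup row is a step-wise exponential decay. A minimal sketch of that schedule is below; the function name and defaults are illustrative, not taken from the paper's code:

```python
def scheduled_lr(iteration, base_lr=0.002, decay=0.97, step=200):
    """Step-wise exponential decay: the learning rate drops to 97% of its
    previous value once every 200 iterations, starting from 0.002."""
    return base_lr * decay ** (iteration // step)

# Example: the rate is constant within each 200-iteration window,
# then shrinks by a factor of 0.97 at the window boundary.
print(scheduled_lr(0))    # base rate at the start of training
print(scheduled_lr(200))  # after the first decay step
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.97)`, though the paper does not state which framework utility (if any) was used.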