DATS: Difficulty-Aware Task Sampler for Meta-Learning Physics-Informed Neural Networks

Authors: Maryam Toloubidokhti, Yubo Ye, Ryan Missel, Xiajun Jiang, Nilesh Kumar, Ruby Shrestha, Linwei Wang

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluated DATS against uniform and self-paced task-sampling baselines on two representative meta-PINN models, across five benchmark PDEs as well as three different residual point sampling strategies. The results demonstrated that DATS was able to improve the accuracy of meta-learned PINN solutions while reducing performance disparity across PDE configurations, at only a fraction of the residual sampling budgets required by its baselines.
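For context, a minimal PyTorch sketch of what difficulty-aware task sampling can look like: tasks (PDE configurations) are drawn with probability increasing in their current loss. The loss-proportional softmax weighting and the `temperature` knob are illustrative assumptions, not the paper's exact derivation of the sampling probabilities.

```python
import torch

def sample_tasks(task_losses: torch.Tensor, n_samples: int, temperature: float = 1.0):
    """Draw meta-training tasks with probability increasing in their current loss.

    task_losses: per-task (per PDE configuration) training losses, shape (n_tasks,).
    temperature: sharpens or softens the difficulty weighting (illustrative knob,
                 not taken from the paper).
    """
    # Higher-loss (harder) PDE configurations get higher sampling probability.
    probs = torch.softmax(task_losses / temperature, dim=0)
    idx = torch.multinomial(probs, n_samples, replacement=True)
    return idx, probs

# Example: five PDE configurations with different current losses.
losses = torch.tensor([0.8, 0.1, 0.4, 1.2, 0.05])
tasks, probs = sample_tasks(losses, n_samples=3)
print(tasks, probs)
```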
Researcher Affiliation | Academia | 1 Rochester Institute of Technology, Rochester, NY, USA; 2 Zhejiang University, Hangzhou, China. {mt6129}@rit.edu, {22230131}@zju.edu.cn
Pseudocode | No | The paper describes algorithms and derivations but does not include explicit pseudocode blocks or algorithm boxes.
Open Source Code | Yes | Source code available at https://github.com/maryamTolou/DATS_ICLR2024.
Open Datasets | No | The paper uses benchmark PDE equations and defines training and test configurations for these PDEs, but it does not provide a direct link or specific access information for a static, publicly available dataset file.
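This is typical for PINN work: training data are collocation (residual) points sampled from the PDE domain rather than static files. A minimal sketch of such on-the-fly sampling for one PDE configuration follows; the domain bounds are placeholders, not the paper's benchmark-specific ranges.

```python
import torch

def sample_residual_points(n_points: int, x_range=(-1.0, 1.0), t_range=(0.0, 1.0)):
    """Uniformly sample space-time collocation points for a 1D time-dependent PDE.

    Placeholder domain bounds; each benchmark in the paper defines its own ranges.
    """
    x = torch.empty(n_points, 1).uniform_(*x_range)
    t = torch.empty(n_points, 1).uniform_(*t_range)
    # Gradients w.r.t. inputs are needed later to evaluate PDE residuals.
    return torch.cat([x, t], dim=1).requires_grad_(True)

# One "task" corresponds to one PDE configuration (e.g., one Burgers viscosity).
pts = sample_residual_points(2048)
print(pts.shape)  # torch.Size([2048, 2])
```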
Dataset Splits | Yes | Table 2 ("The range and number of PDE configurations considered in each PDE benchmark") lists, per benchmark, the configuration range and the number of training/test configurations: Burgers 14 training / 6 test; Convection 5 / 3; Reaction Diffusion 9 / 4; Helmholtz (2D) 9 / 4 (configuration ranges omitted here).
Hardware Specification | Yes | Experiments were run on NVIDIA Tesla T4s with 16 GB memory.
Software Dependencies | No | The paper mentions optimizers like ADAM and general network architectures, but it does not specify software versions (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Appendix B.1 BURGERS EQUATION: Fully Connected Layers; Number of Layers: 7; Hidden layer dimension: 8; ... Optimizer: ADAM; Learning rate: 1e-4 with cosine annealing; Epochs: 20000.
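As a reading aid, a minimal PyTorch sketch of the quoted Burgers setup (7 fully connected layers, hidden dimension 8, ADAM at 1e-4 with cosine annealing over 20000 epochs). The input/output dimensions, Tanh activation, and loss placeholder are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

# Backbone matching the quoted appendix: 7 fully connected layers, hidden dim 8.
# Input dim 2 (x, t), output dim 1 (u), and the Tanh activation are assumptions.
dims = [2] + [8] * 6 + [1]          # 7 linear layers in total
layers = []
for i in range(len(dims) - 1):
    layers.append(nn.Linear(dims[i], dims[i + 1]))
    if i < len(dims) - 2:
        layers.append(nn.Tanh())
model = nn.Sequential(*layers)

# Quoted training settings: ADAM, lr 1e-4, cosine annealing, 20000 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=20000)

for epoch in range(20000):
    optimizer.zero_grad()
    # loss = ...  # placeholder: PDE residual + boundary/initial-condition terms
    # loss.backward()
    optimizer.step()
    scheduler.step()
```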