Nonlocal Attention Operator: Materializing Hidden Knowledge Towards Interpretable Physics Discovery
Authors: Yue Yu, Ning Liu, Fei Lu, Tian Gao, Siavash Jafarzadeh, Stewart A. Silling
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on zero-shot learning to new and unseen physical systems, demonstrating the generalizability of NAO in both forward and inverse PDE problems. |
| Researcher Affiliation | Collaboration | Yue Yu, Department of Mathematics, Lehigh University, Bethlehem, PA 18015, USA; Ning Liu, Global Engineering and Materials, Inc., Princeton, NJ 08540, USA; Fei Lu, Department of Mathematics, Johns Hopkins University, Baltimore, MD 21218, USA; Tian Gao, IBM Research, Yorktown Heights, NY 10598, USA; Siavash Jafarzadeh, Department of Mathematics, Lehigh University, Bethlehem, PA 18015, USA; Stewart Silling, Center for Computing Research, Sandia National Laboratories, Albuquerque, NM 87123, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | Our code and data accompanying this paper are available at https://github.com/fishmoon1234/NAO. |
| Open Datasets | Yes | Our code and data accompanying this paper are available at https://github.com/fishmoon1234/NAO. To generate the training data, we consider 7 sine-type kernels... We generate 4530 data pairs (g^η[u], f^η)... |
| Dataset Splits | No | The paper specifies training and testing splits (e.g., "9000 for training and 1000 for testing" in C.2, "45000 are used for training and 5000 for testing" in C.3) but does not specify a validation set or its split percentage (a hedged split sketch appears below the table). |
| Hardware Specification | Yes | Experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU with 24 GB memory. |
| Software Dependencies | No | The paper mentions the use of "FEniCS finite element package" (Section C.3) but does not provide specific version numbers for this or any other key software components like programming languages or deep learning frameworks (e.g., Python, PyTorch). |
| Experiment Setup | Yes | In all experiments, the optimization is performed with the Adam optimizer. To conduct fair comparison across different methods, we tune the hyperparameters, including the learning rates, the decay rates, and the regularization parameters, to minimize the training loss. In all examples, we use 3-layer models, and parameterize the kernel networks W_{P,u} and W_{P,f} with a 3-layer MLP with hidden dimensions (32, 64) and LeakyReLU activation (a minimal sketch of this configuration also appears below). |
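
The Dataset Splits row quotes a 9000/1000 train/test partition from Appendix C.2 without a validation set. The sketch below is a minimal, assumed illustration of that partition in PyTorch; the 10% validation carve-out at the end is our own assumption, not something reported in the paper.

```python
import torch

# Illustrates the 9000/1000 train/test proportions quoted above (Appendix C.2).
n_total = 10_000
perm = torch.randperm(n_total, generator=torch.Generator().manual_seed(0))
train_idx, test_idx = perm[:9000], perm[9000:]

# The paper reports no validation split; carving 10% out of the training
# set, as below, is an assumption for illustration only.
val_idx, train_idx = train_idx[:900], train_idx[900:]
```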
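The Experiment Setup row pins down the kernel-network architecture (a 3-layer MLP with hidden dimensions (32, 64) and LeakyReLU activation) and the optimizer (Adam). The following is a minimal PyTorch sketch of that configuration; the input/output dimensions, the learning rate, and the dummy training step are illustrative assumptions, since the paper tunes hyperparameters per example and does not fix these values.

```python
import torch
import torch.nn as nn

# Minimal sketch of the kernel-network parameterization quoted above:
# a 3-layer MLP with hidden dimensions (32, 64) and LeakyReLU activation.
# `in_dim`, `out_dim`, and the learning rate are assumptions for illustration.
def make_kernel_mlp(in_dim: int = 2, out_dim: int = 1) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(in_dim, 32),
        nn.LeakyReLU(),
        nn.Linear(32, 64),
        nn.LeakyReLU(),
        nn.Linear(64, out_dim),
    )

model = make_kernel_mlp()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as in the paper

# One illustrative optimization step on dummy data (not the paper's loss).
x, y = torch.randn(16, 2), torch.randn(16, 1)
loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```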