Solving Poisson Equations using Neural Walk-on-Spheres
Authors: Hong Chul Nam, Julius Berner, Anima Anandkumar
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In several challenging, high-dimensional numerical examples, we demonstrate the superiority of NWoS in accuracy, speed, and computational costs. Compared to commonly used PINNs, our approach can reduce memory usage and errors by orders of magnitude. Furthermore, we apply NWoS to problems in PDE-constrained optimization and molecular dynamics to show its efficiency in practical applications. |
| Researcher Affiliation | Academia | ¹ETH Zurich, ²Caltech. Correspondence to: Hong Chul Nam <honam@student.ethz.ch>, Julius Berner <jberner@caltech.edu>. |
| Pseudocode | Yes | Algorithm 1: Training of vanilla NWoS method (a hedged training-loop sketch follows the table). |
| Open Source Code | Yes | Our PyTorch code can be found at https://github.com/bizoffermark/neural_wos. |
| Open Datasets | No | The paper defines mathematical problems (Laplace Equation, Poisson Equation, Committor Function) with analytical solutions for evaluation, rather than using external publicly available datasets with access information. |
| Dataset Splits | No | The paper describes sampling points in the domain and on the boundary for training and evaluating on unseen points via MC integration, but it does not specify explicit train/validation/test dataset splits with percentages or counts. |
| Hardware Specification | Yes | The experiments have been conducted on A100 GPUs. |
| Software Dependencies | No | The paper states 'We implemented all methods in PyTorch' but does not provide a specific version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For all our training, we use the Adam optimizer and limit the runtime to 25d + 750 seconds for a fair comparison... We employ an exponentially decaying learning rate... We choose a feedforward neural network with residual connections, 6 layers, a width of 256, and a GELU activation function. We also perform the grid search for the boundary loss penalty term, i.e., β ∈ {0.5, 1, 5, 50, 100, 500, 1000, 5000}. We further include the batch size m ∈ {2^i} for i = 7, ..., 17 in our grid search. (A configuration sketch follows the table.) |
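
The Pseudocode row above cites Algorithm 1 (training of the vanilla NWoS method). As a rough, hedged illustration of the underlying idea, and not the authors' implementation, the PyTorch sketch below regresses a network onto walk-on-spheres estimates for the Laplace special case on the unit ball. The domain, the boundary data `g`, the placeholder network, and every hyperparameter here are assumptions; the paper's Poisson setting additionally adds a Green's-function volume contribution at each step of the walk.

```python
# Hypothetical sketch of WoS-style training targets for the Laplace equation
# (Delta u = 0) on the unit ball in R^d with boundary data g. Illustrative
# only: domain, g, network, and all hyperparameters are assumptions.
import torch

d, eps = 10, 1e-4                       # assumed dimension and boundary tolerance

def g(x):                               # assumed boundary data; u(x) = x_1 * x_2
    return x[:, 0] * x[:, 1]            # is harmonic, so it is also the solution

def wos_targets(x, max_steps=1000):
    """Walk on spheres: jump to a uniform point on the largest inscribed
    sphere until within eps of the boundary, then evaluate g there."""
    x = x.clone()
    for _ in range(max_steps):
        r = 1.0 - x.norm(dim=1)         # distance to the unit-sphere boundary
        active = r > eps                # walkers still far from the boundary
        if not active.any():
            break
        v = torch.randn_like(x)
        v = v / v.norm(dim=1, keepdim=True)           # uniform random direction
        x = x + active.float().unsqueeze(1) * r.unsqueeze(1) * v
    return g(x / x.norm(dim=1, keepdim=True))         # project onto the boundary

model = torch.nn.Sequential(torch.nn.Linear(d, 256), torch.nn.GELU(),
                            torch.nn.Linear(256, 1))  # placeholder network
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
beta = 100.0                            # boundary penalty (one grid-search value)

for step in range(1000):
    x_int = torch.randn(512, d)         # uniform samples in the unit ball
    x_int = x_int / x_int.norm(dim=1, keepdim=True) * torch.rand(512, 1) ** (1 / d)
    x_bdy = torch.randn(512, d)
    x_bdy = x_bdy / x_bdy.norm(dim=1, keepdim=True)   # uniform boundary samples
    with torch.no_grad():
        target = wos_targets(x_int)     # stochastic WoS estimates of u(x_int)
    loss = ((model(x_int).squeeze(1) - target) ** 2).mean() \
         + beta * ((model(x_bdy).squeeze(1) - g(x_bdy)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The boundary penalty mirrors the β grid-searched in the Experiment Setup row; the actual NWoS loss, buffering, and variance-reduction details are given in the paper.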
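The Experiment Setup row can likewise be read as a concrete configuration. The following is a minimal sketch under stated assumptions: the residual-block arrangement, the initial learning rate, and the decay factor are not specified in the quoted text and are chosen here purely for illustration.

```python
# Hedged reading of the setup row: a width-256, 6-layer residual MLP with GELU,
# trained with Adam and an exponentially decaying learning rate. The block
# layout, the initial LR of 1e-3, and the decay factor 0.9995 are assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.lin = nn.Linear(width, width)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.act(self.lin(x))     # skip connection around each layer

class ResMLP(nn.Module):
    """Feedforward net with residual connections: 6 layers, width 256, GELU."""
    def __init__(self, dim_in, width=256, depth=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, width),
            *[ResidualBlock(width) for _ in range(depth)],
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.net(x)

model = ResMLP(dim_in=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)                # assumed LR
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9995)  # assumed decay
# Grid-searched values quoted above: beta in {0.5, 1, 5, 50, 100, 500, 1000,
# 5000} and batch size m in {2^7, ..., 2^17}.
```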