Learning to Optimize Multigrid PDE Solvers

Authors: Daniel Greenfeld, Meirav Galun, Ronen Basri, Irad Yavneh, Ron Kimmel

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on a broad class of 2D diffusion problems demonstrate improved convergence rates compared to the widely used Black-Box multigrid scheme, suggesting that our method successfully learned rules for constructing prolongation matrices.
Researcher Affiliation | Academia | 1 Weizmann Institute of Science, Rehovot, Israel. 2 Technion, Israel Institute of Technology, Haifa, Israel.
Pseudocode | Yes | Algorithm 1: Two-Grid Cycle (a sketch of such a cycle appears after the table).
Open Source Code | No | The paper does not provide any specific links or explicit statements about the availability of its source code.
Open Datasets | No | The paper generates its own diffusion problems by sampling coefficients from a log-normal distribution and constructing block-periodic problems (see the sampling sketch after the table). It does not refer to a pre-existing, publicly available dataset with concrete access information (link, DOI, or formal citation).
Dataset Splits | No | The paper describes training and test procedures and problem instances, but does not mention a validation set or explicit splits for hyperparameter tuning.
Hardware Specification | No | The paper mentions training neural networks and running experiments, implying the use of computational hardware, but does not specify any particular CPU, GPU, or other hardware models.
Software Dependencies | No | The paper mentions using ReLU activations and the Adam optimizer, but does not provide version numbers for any software libraries, frameworks, or languages used.
Experiment Setup | Yes | We train a residual network consisting of 100 fully-connected layers of width 100 with ReLU activations. Training is performed in three stages. First, the network was trained for two epochs on 163,840 diffusion problems with grid size 16 × 16, composed of 8 × 8 doubly periodic core blocks and with doubly periodic boundary conditions. The network was initialized using the scheme suggested in (Zhang et al., 2019). Throughout the training process, the optimizer used was Adam, with an initial learning rate drawn from 10^-U([4,6]). (A sketch of this setup appears after the table.)
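
To make the quoted pseudocode concrete, here is a minimal NumPy sketch of a standard two-grid cycle. The damped-Jacobi smoother, the restriction R = P^T, and the Galerkin coarse operator P^T A P are conventional choices assumed here; the paper's contribution is learning the rule that produces the prolongation matrix P.

    import numpy as np

    def two_grid_cycle(A, b, x, P, nu_pre=1, nu_post=1, omega=0.8):
        """One two-grid correction cycle for the linear system A x = b.

        P is the prolongation matrix, the object the paper's network
        learns to construct. The smoother and coarsening choices below
        are conventional assumptions, not taken verbatim from the paper.
        """
        D_inv = 1.0 / np.diag(A)  # inverse diagonal for Jacobi smoothing

        # Pre-smoothing: damped Jacobi sweeps.
        for _ in range(nu_pre):
            x = x + omega * D_inv * (b - A @ x)

        # Restrict the fine-grid residual to the coarse grid via P^T.
        r_coarse = P.T @ (b - A @ x)

        # Solve the coarse Galerkin system (P^T A P) e = r exactly.
        e_coarse = np.linalg.solve(P.T @ A @ P, r_coarse)

        # Prolongate the coarse-grid correction and apply it.
        x = x + P @ e_coarse

        # Post-smoothing.
        for _ in range(nu_post):
            x = x + omega * D_inv * (b - A @ x)
        return x

Recursing on the coarse solve instead of solving it exactly yields the usual multigrid V-cycle.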
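
For the data-generation step described in the Open Datasets row, a sketch of sampling log-normal diffusion coefficients on a core block and tiling it periodically might look as follows; the block size, tiling factor, and log-normal parameters here are illustrative placeholders rather than the paper's exact values.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def sample_block_periodic_coefficients(core=8, tiles=2, sigma=1.0):
        """Sample diffusion coefficients g = exp(z), z ~ N(0, sigma^2),
        on a core x core block, then tile the block to obtain a
        block-periodic (tiles*core) x (tiles*core) coefficient field.
        All parameter values here are illustrative assumptions.
        """
        g_core = rng.lognormal(mean=0.0, sigma=sigma, size=(core, core))
        return np.tile(g_core, (tiles, tiles))

    g = sample_block_periodic_coefficients()  # 16 x 16 field of 8 x 8 blocks

Each such coefficient field defines one diffusion problem; discretizing it produces the system matrix A consumed by the cycle above.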
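
Finally, a minimal PyTorch sketch of the quoted experiment setup: a 100-layer, width-100 residual network with ReLU activations, trained with Adam at an initial learning rate drawn from 10^-U([4,6]). The input/output sizes and the exact residual wiring are assumptions, and the (Zhang et al., 2019) initialization scheme is omitted for brevity.

    import torch
    import torch.nn as nn

    class ResidualMLP(nn.Module):
        """100 fully connected layers of width 100 with ReLU activations.
        The residual wiring shown here is an assumption; the paper states
        only the depth, width, and activation."""

        def __init__(self, in_dim, out_dim, width=100, depth=100):
            super().__init__()
            self.inp = nn.Linear(in_dim, width)
            self.hidden = nn.ModuleList(
                nn.Linear(width, width) for _ in range(depth)
            )
            self.out = nn.Linear(width, out_dim)

        def forward(self, x):
            x = self.inp(x)
            for layer in self.hidden:
                x = x + torch.relu(layer(x))  # residual connection
            return self.out(x)

    # in_dim/out_dim are placeholders for the stencil-to-prolongation mapping.
    model = ResidualMLP(in_dim=25, out_dim=4)

    # Initial learning rate drawn from 10^-U([4, 6]), per the quoted setup.
    lr = 10.0 ** -torch.empty(1).uniform_(4.0, 6.0).item()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)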