Learning Algebraic Multigrid Using Graph Neural Networks
Authors: Ilay Luz, Meirav Galun, Haggai Maron, Ronen Basri, Irad Yavneh
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on a broad class of problems show improved convergence rates compared to classical AMG, demonstrating the potential utility of neural networks for developing sparse system solvers. |
| Researcher Affiliation | Collaboration | ¹Weizmann Institute of Science, Rehovot, Israel; ²NVIDIA Research; ³Technion, Israel Institute of Technology, Haifa, Israel. |
| Pseudocode | Yes | Algorithm 1 Two-Level Algorithm (a generic two-level cycle is sketched below the table). |
| Open Source Code | Yes | Code for reproducing experiments is available at https://github.com/ilayluz/learning-amg. |
| Open Datasets | No | The training data comprise block-circulant graph Laplacian matrices, composed of 4×4 blocks with 64 points in each block, yielding 1024 variables. ... To this end, we sample points uniformly on the unit square, and compute a Delaunay triangulation. (A data-generation sketch appears after this table.) |
| Dataset Splits | No | The paper describes generating training data and evaluating performance but does not specify explicit train/validation/test dataset splits by percentage or sample count. |
| Hardware Specification | Yes | All experiments were conducted using the TensorFlow framework (Abadi et al., 2016) on an NVIDIA V100 GPU. |
| Software Dependencies | No | The paper mentions the 'TensorFlow framework' and the 'Adam optimizer' but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We use a batch size of 32 and employ the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 3×10⁻³. (A minimal training-step sketch appears after this table.) |
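
The paper includes pseudocode ("Algorithm 1 Two-Level Algorithm"). For orientation, the following is a minimal NumPy/SciPy sketch of a generic two-level cycle with Galerkin coarsening; the weighted-Jacobi smoother, the exact coarse solve, and the function names are illustrative assumptions rather than the paper's exact formulation, and `A` is assumed nonsingular.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def jacobi_smooth(A, b, x, omega=2.0 / 3.0, iters=1):
    """Weighted-Jacobi relaxation (illustrative smoother choice)."""
    d_inv = 1.0 / A.diagonal()
    for _ in range(iters):
        x = x + omega * d_inv * (b - A @ x)
    return x

def two_level_cycle(A, b, x, P, pre=1, post=1):
    """Generic two-level cycle: pre-smooth, coarse-grid correction, post-smooth.

    P is the prolongation operator; restriction is taken as P^T
    (the Galerkin choice used in classical AMG).
    """
    x = jacobi_smooth(A, b, x, iters=pre)      # pre-smoothing
    r = b - A @ x                              # fine-grid residual
    Ac = (P.T @ A @ P).tocsr()                 # Galerkin coarse operator
    ec = spla.spsolve(Ac, P.T @ r)             # exact coarse-grid solve
    x = x + P @ ec                             # prolongate and correct
    x = jacobi_smooth(A, b, x, iters=post)     # post-smoothing
    return x
```

In the paper's setting, it is the prolongation `P` that the GNN learns; everything else in the cycle is standard multigrid machinery.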
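The evaluation meshes described above (uniform random points on the unit square, Delaunay triangulation) can be generated in a few lines of NumPy/SciPy. The sketch below builds a combinatorial graph Laplacian from such a triangulation; the unit edge weights and the helper name `delaunay_laplacian` are illustrative assumptions, since the quoted text does not specify the edge-weight distribution.

```python
import numpy as np
import scipy.sparse as sp
from scipy.spatial import Delaunay

def delaunay_laplacian(n_points, rng=None):
    """Graph Laplacian of a Delaunay triangulation of uniform random points."""
    rng = np.random.default_rng(rng)
    pts = rng.uniform(0.0, 1.0, size=(n_points, 2))  # points on the unit square
    tri = Delaunay(pts)
    rows, cols = [], []
    for simplex in tri.simplices:                    # each triangle's three edges
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            rows += [a, b]
            cols += [b, a]
    W = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(n_points, n_points)).tocsr()
    W.data[:] = 1.0                                  # reset edges shared by two triangles to unit weight
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return D - W                                     # combinatorial Laplacian L = D - W
```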
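The quoted setup (batch size 32, Adam, learning rate 3×10⁻³) maps directly onto a TensorFlow 2 training step. The sketch below is a minimal, generic step under the assumption of a `tf.keras`-style model; `model` and `loss_fn` are placeholders, since the paper's GNN architecture and training loss are not quoted here.

```python
import tensorflow as tf

# Reported hyperparameters: batch size 32, Adam with learning rate 3e-3.
BATCH_SIZE = 32
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-3)

@tf.function
def train_step(model, loss_fn, batch):
    with tf.GradientTape() as tape:
        loss = loss_fn(model, batch)  # forward pass + loss (placeholders)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```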