Optimization-Based Algebraic Multigrid Coarsening Using Reinforcement Learning
Authors: Ali Taghibakhshi, Scott MacLachlan, Luke Olson, Matthew West
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that this method can produce better coarse graphs than existing algorithms, even as the graph size increases and other properties of the graph are varied. |
| Researcher Affiliation | Academia | Ali Taghibakhshi, Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA (alit2@illinois.edu); Scott MacLachlan, Mathematics and Statistics, Memorial University of Newfoundland and Labrador, St. John's, NL, Canada (smaclachlan@mun.ca); Luke Olson, Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA (lukeo@illinois.edu); Matthew West, Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA (mwest@illinois.edu) |
| Pseudocode | Yes | Algorithm 1 Two-Level AMG Algorithm; Algorithm 2 Evaluation Algorithm (a minimal two-level AMG example follows the table) |
| Open Source Code | Yes | All code and data for this paper is at https://github.com/compdyn/rl_grid_coarsen (MIT licensed). |
| Open Datasets | Yes | Each episode consists of a different random convex triangular mesh in 2D with approximately 30 to 120 nodes. Meshes are generated by selecting a number of uniformly random points in 2D, taking their convex hull, and meshing the interior using pygmsh [39] (see Figure 1 for an example). All code and data for this paper is at https://github.com/compdyn/rl_grid_coarsen (MIT licensed). (A sketch of this mesh-generation recipe follows the table.) |
| Dataset Splits | No | The paper describes the generation of training meshes and the use of test grids, but it does not specify explicit dataset splits (e.g., percentages or counts) for training, validation, and testing that would allow for direct reproducibility of the data partitioning. |
| Hardware Specification | Yes | All computation was performed using CPU-only on an 8-core i9 MacBook Pro. |
| Software Dependencies | No | The code was implemented using PyTorch Geometric [40], PyAMG [41], and NetworkX [42]. The paper mentions these software packages but does not provide specific version numbers for them. |
| Experiment Setup | Yes | We train for 10k episodes of DDQN with a learning rate of 10⁻³, a replay buffer of size 5×10³, and a batch size of 32. At the beginning of training, the exploration ε value in the DDQN algorithm is set to 1 and decayed at a rate of 1.25×10⁻⁴ to a minimum value of 10⁻². The target network weights are updated after every 4 episodes, and the frozen network is replaced by the target network after every 10 learning steps. For all tests, we used the agent trained following Section 4.2 and evaluated using Algorithm 2 with a mean Lloyd aggregate size of 400 nodes and 1000 Lloyd cycles. (A hedged sketch of this exploration schedule follows the table.) |
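For readers unfamiliar with the two-level setting that Algorithm 1 formalizes, the following is a minimal sketch of a two-level AMG solve using PyAMG, one of the paper's listed dependencies. It uses PyAMG's built-in smoothed-aggregation coarsening rather than the paper's RL-learned coarsening; the Poisson test matrix and the `max_levels=2` restriction are illustrative choices, not the paper's experimental setup.

```python
# Baseline two-level AMG solve with PyAMG (not the paper's RL coarsening).
# max_levels=2 restricts the hierarchy to one fine and one coarse grid.
import numpy as np
import pyamg

A = pyamg.gallery.poisson((50, 50), format="csr")  # 2D Poisson test matrix
b = np.random.rand(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A, max_levels=2)
residuals = []
x = ml.solve(b, tol=1e-8, residuals=residuals)

print(ml)  # prints the two-level hierarchy summary
print("average residual reduction per cycle:",
      (residuals[-1] / residuals[0]) ** (1.0 / (len(residuals) - 1)))
```

The average residual reduction per cycle printed here is the same flavor of convergence-factor measurement the paper uses to compare coarsening strategies.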
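The mesh-generation recipe quoted under Open Datasets (uniform random points, convex hull, pygmsh meshing of the interior) can be sketched as below. This is not the authors' script: the function name `random_convex_mesh` and the `n_points`/`mesh_size` parameters are illustrative, and `mesh_size` would need tuning to hit the paper's 30-120 node range.

```python
# Hedged sketch of the paper's mesh-generation recipe; names and defaults
# are assumptions, not taken from the authors' code. Requires gmsh installed.
import numpy as np
from scipy.spatial import ConvexHull
import pygmsh

def random_convex_mesh(n_points=40, mesh_size=0.15, seed=None):
    rng = np.random.default_rng(seed)
    pts = rng.random((n_points, 2))      # uniform random points in the unit square
    hull = ConvexHull(pts)               # convex hull of the point cloud
    boundary = pts[hull.vertices]        # hull vertices, counterclockwise order
    with pygmsh.geo.Geometry() as geom:
        geom.add_polygon(boundary.tolist(), mesh_size=mesh_size)
        mesh = geom.generate_mesh()      # triangulate the interior
    return mesh.points, mesh.cells_dict["triangle"]

verts, tris = random_convex_mesh(seed=0)
print(verts.shape, tris.shape)           # node coordinates, triangle indices
```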
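The Experiment Setup row reports an exploration schedule (ε from 1 down to 10⁻² at a decay rate of 1.25×10⁻⁴) without stating the decay form. Assuming a linear per-episode decay, a sketch:

```python
# Hedged sketch of the reported DDQN hyperparameters; the linear decay
# form is an assumption -- the paper gives only the rate and the floor.
EPS_START, EPS_MIN, EPS_DECAY = 1.0, 1e-2, 1.25e-4
LEARNING_RATE, BUFFER_SIZE, BATCH_SIZE = 1e-3, 5_000, 32
N_EPISODES = 10_000

def epsilon(episode: int) -> float:
    """Exploration rate at a given training episode (linear decay to a floor)."""
    return max(EPS_MIN, EPS_START - EPS_DECAY * episode)

# Under this assumption the floor is reached at episode
# (1.0 - 0.01) / 1.25e-4 = 7920, i.e. within the 10k-episode run.
```

A linear reading is at least consistent with the run length: ε bottoms out around episode 7,920. A multiplicative per-episode reading (ε scaled by 1 − 1.25×10⁻⁴ each episode) would only fall to roughly 0.29 after 10k episodes, so it would never reach the stated floor.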