Self-Supervised Coarsening of Unstructured Grid with Automatic Differentiation

Authors: Sergei Shumilin, Alexander Ryabov, Nikolay Yavich, Evgeny Burnaev, Vladimir Vanovskiy

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate performance of the designed algorithm on two PDEs: a linear parabolic equation which governs slightly compressible fluid flow in porous media and the wave equation. Our results show that in the considered scenarios, we reduced the number of grid points up to 10 times while preserving the modeled variable dynamics in the points of interest. (Standard forms of these two equations are sketched after the table.)
Researcher Affiliation | Academia | ¹Applied AI Center, Skolkovo Institute of Science and Technology, Moscow, Russia; ²AIRI Institute, Moscow, Russia. Correspondence to: Sergei Shumilin <s.shumilin@skoltech.ru>, Alexander Ryabov <a.ryabov@skoltech.ru>, Nikolay Yavich <n.yavich@skoltech.ru>, Evgeny Burnaev <e.burnaev@skoltech.ru>, Vladimir Vanovskiy <v.vanovskiy@skoltech.ru>.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available on GitHub: https://github.com/SergeiShumilin/Differentiable_Unstructured_Grid_Coarsening
Open Datasets | No | The paper uses internally generated synthetic data, described as "synthetic permeability generated by the following function: sin(ax) + sin(by) + 2.5 where a = b = 0.05", and refers to a "real oil reservoir model" in Appendix D without providing access details. (A sketch reproducing the synthetic field appears after the table.)
Dataset Splits | No | The paper mentions splitting train and test time periods for the simulation results ("Red dotted line splits training and test periods", Fig. 11), but it does not provide dataset splits in the conventional sense, e.g., percentages or counts for distinct training, validation, and test sets.
Hardware Specification | Yes | The experiments were conducted on Google Colab, using two cores of an Intel(R) Xeon(R) CPU @ 2.20GHz and 12.7 GB of RAM, to ensure a consistent and replicable environment.
Software Dependencies | No | The paper mentions using "PyTorch (Paszke et al., 2017)" and the "PyTorch Geometric (Fey & Lenssen, 2019)" framework but does not specify version numbers. (A snippet for recording the versions actually used is given after the table.)
Experiment Setup | Yes | For this experiment, optimization was performed for 20 epochs over 10^4 modeled time steps, using the Adam optimizer with a learning rate of 10^-3. (A minimal training-loop sketch consistent with these settings follows the table.)
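
For reference, the two PDE families named in the Research Type row have the following standard textbook forms. This is a sketch with conventional notation, not the authors' exact formulation; the paper's coefficients, source terms, and boundary conditions may differ.

```latex
% Linear parabolic equation for slightly compressible single-phase flow in
% porous media (p: pressure, \phi: porosity, c_t: total compressibility,
% k: permeability, \mu: viscosity, q: source/sink term):
\phi c_t \frac{\partial p}{\partial t}
  = \nabla \cdot \left( \frac{k}{\mu} \nabla p \right) + q

% Scalar wave equation (u: modeled field, c: wave speed):
\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u
```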
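The synthetic permeability function quoted in the Open Datasets row is simple enough to reproduce directly; a minimal NumPy sketch is below. The domain size and grid resolution are assumptions, since the excerpt does not state them.

```python
import numpy as np

# Reproduces the synthetic permeability field quoted in the Open Datasets row:
# k(x, y) = sin(a*x) + sin(b*y) + 2.5 with a = b = 0.05.
# The domain extent and resolution below are assumptions, not values from the paper.
a = b = 0.05
x = np.linspace(0.0, 100.0, 128)
y = np.linspace(0.0, 100.0, 128)
X, Y = np.meshgrid(x, y, indexing="ij")

permeability = np.sin(a * X) + np.sin(b * Y) + 2.5  # strictly positive field
print(permeability.min(), permeability.max())       # roughly [0.5, 4.5] on this domain
```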
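Since the Software Dependencies row flags missing version numbers, anyone reproducing the work should at least record the versions they ran with; both libraries expose a standard `__version__` attribute.

```python
# Record the versions of the unversioned dependencies named in the paper.
import torch
import torch_geometric

print("PyTorch:", torch.__version__)
print("PyTorch Geometric:", torch_geometric.__version__)
```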
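The Experiment Setup row pins down only the optimizer, learning rate, epoch count, and time-step count. Below is a minimal PyTorch sketch consistent with those numbers; `coarsening_params`, `simulate`, and `loss_at_points_of_interest` are hypothetical placeholders for the paper's differentiable pipeline, not the authors' actual components.

```python
import torch

# Sketch matching the reported setup: Adam optimizer, learning rate 1e-3,
# 20 epochs of optimization, 10^4 simulated time steps per forward pass.
coarsening_params = torch.randn(256, requires_grad=True)  # assumed parameter shape

optimizer = torch.optim.Adam([coarsening_params], lr=1e-3)

def simulate(params: torch.Tensor, n_steps: int = 10_000) -> torch.Tensor:
    # Placeholder for the differentiable forward simulation on the coarse grid;
    # returns one value per time step so gradients can flow back to `params`.
    return params.tanh().mean() * torch.linspace(0.0, 1.0, n_steps)

def loss_at_points_of_interest(trajectory: torch.Tensor) -> torch.Tensor:
    # Placeholder mismatch between coarse-grid and reference dynamics.
    return trajectory.pow(2).mean()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_at_points_of_interest(simulate(coarsening_params))
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.6f}")
```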