Solving high-dimensional parabolic PDEs using the tensor train format

Authors: Lorenz Richter, Leon Sallandt, Nikolas Nüsken

ICML 2021

Reproducibility assessment (each item gives the variable, the result, and the LLM response):
Research Type: Experimental. 'In this section we consider some examples of high-dimensional PDEs that have been addressed in recent articles and treat them as benchmark problems in order to compare our algorithms with respect to approximation accuracy and computation time. We refer to Appendix C for implementation details and to Appendix D for additional experiments. In Table 1 we compare the explicit scheme stated in (10) with the implicit scheme from (11), once with TTs and once with NNs.'
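To make the quoted comparison concrete, here is a minimal NumPy sketch of one backward step of an explicit regression scheme of the kind the excerpt's Eq. (10) refers to. The names (`explicit_backward_step`, `features`, `f`, `v_np1`) are illustrative assumptions, not the paper's code; the implicit scheme of Eq. (11) would instead place the nonlinearity inside the regression target and iterate the fit to a fixed point.

```python
import numpy as np

def explicit_backward_step(x_n, x_np1, v_np1, f, dt, features):
    """One backward-in-time regression step (explicit variant, a sketch).

    x_n, x_np1 : (K, d) SDE samples at times t_n and t_{n+1}
    v_np1      : callable, the already-fitted value approximation at t_{n+1}
    f          : nonlinearity of the semilinear PDE, evaluated explicitly
    features   : callable mapping (K, d) samples to a (K, p) design matrix
    """
    y_next = v_np1(x_np1)                    # values at the next time slice
    targets = y_next + dt * f(x_n, y_next)   # explicit: f uses known values
    Phi = features(x_n)                      # design matrix of ansatz functions
    theta, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
    return lambda x: features(x) @ theta     # fitted approximation at t_n
```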
Researcher Affiliation: Collaboration. 1 Freie Universität Berlin, Germany; 2 BTU Cottbus-Senftenberg, Germany; 3 dida Datenschmiede GmbH, Germany; 4 Technische Universität Berlin, Germany; 5 Universität Potsdam, Germany. Correspondence to: Lorenz Richter <lorenz.richter@fu-berlin.de>, Leon Sallandt <sallandt@math.tu-berlin.de>.
Pseudocode: Yes. Algorithm 1 ('simple ALS algorithm') and Algorithm 2 ('PDE approximation').
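Since Algorithm 1 is an alternating-least-squares (ALS) routine, a compact NumPy illustration of the ALS idea may help. This sketch fits a full 3-mode tensor with a rank-(r1, r2) tensor train by cycling through the cores and solving a linear least-squares problem for each; the paper applies the same mechanism to regression problems via the Xerus library, so this is an assumed simplification, not the paper's implementation.

```python
import numpy as np

def als_tt3(A, r1, r2, sweeps=10):
    """Fit a 3-core tensor train to a full tensor A of shape (n1, n2, n3):
    each update fixes two cores and solves least squares for the third."""
    n1, n2, n3 = A.shape
    rng = np.random.default_rng(0)
    G1 = rng.standard_normal((n1, r1))
    G2 = rng.standard_normal((r1, n2, r2))
    G3 = rng.standard_normal((r2, n3))
    for _ in range(sweeps):
        # Update G1: A_(1) ~ G1 @ B, where B[a, (j,k)] = sum_b G2[a,j,b] G3[b,k]
        B = np.einsum('ajb,bk->ajk', G2, G3).reshape(r1, n2 * n3)
        G1 = np.linalg.lstsq(B.T, A.reshape(n1, -1).T, rcond=None)[0].T
        # Update G2: for each j, solve kron(G1, G3.T) x = vec(A[:, j, :])
        M = np.kron(G1, G3.T)                # (n1*n3, r1*r2) design matrix
        for j in range(n2):
            rhs = A[:, j, :].reshape(-1)
            G2[:, j, :] = np.linalg.lstsq(M, rhs, rcond=None)[0].reshape(r1, r2)
        # Update G3: A_(3) ~ C @ G3, where C[(i,j), b] = sum_a G1[i,a] G2[a,j,b]
        C = np.einsum('ia,ajb->ijb', G1, G2).reshape(n1 * n2, r2)
        G3 = np.linalg.lstsq(C, A.reshape(n1 * n2, n3), rcond=None)[0]
    return G1, G2, G3
```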
Open Source Code: No. The paper mentions external libraries such as Xerus ('Huber, B. and Wolf, S. Xerus - a general purpose tensor library. https://libxerus.org/, 2014-2017.'), but it does not provide an explicit statement or link to the source code for the methodology presented in the paper.
Open Datasets: No. The paper addresses solving PDEs for which reference solutions are generated via Monte Carlo approximation or other numerical methods (e.g., 'a reference solution is available: V(x, t) = log E[exp(g(x + √(T - t) σξ))]' and 'a reference solution is available, V(x, t) = log E[exp(g(X_T)) | X_t = x]'). It does not use or provide access to a predefined public dataset for its experiments.
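The first quoted reference solution is straightforward to estimate by Monte Carlo. The sketch below does so, reading ξ as a standard Gaussian vector and treating σ as a scalar for simplicity, and uses a log-sum-exp to keep the log-expectation numerically stable; the function name and sampling details are illustrative assumptions.

```python
import numpy as np

def reference_solution(g, x, t, T, sigma, n_mc=10**6, rng=None):
    """Monte Carlo estimate of V(x, t) = log E[exp(g(x + sqrt(T - t) sigma xi))]
    with xi ~ N(0, I_d); sigma is assumed scalar here for simplicity."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = x.shape[0]
    xi = rng.standard_normal((n_mc, d))
    vals = g(x + np.sqrt(T - t) * sigma * xi)    # (n_mc,) values of g
    # log E[e^vals] estimated as logsumexp(vals) - log(n_mc)
    m = vals.max()
    return m + np.log(np.exp(vals - m).mean())
```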
Dataset Splits: No. The paper generates K Monte Carlo samples (e.g., 'K = 2000 samples') from the discretized SDE at each step for the numerical approximation, rather than partitioning a predefined static dataset into training, validation, and test sets; no explicit dataset split information is provided.
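For context, the K samples per step would typically come from an Euler-Maruyama discretization of the underlying SDE. A hedged sketch follows, with placeholder drift b and constant diffusion matrix sigma, since the benchmark-specific coefficients vary across the paper's examples.

```python
import numpy as np

def simulate_sde(x0, b, sigma, T, dt, K, rng=None):
    """Euler-Maruyama discretization generating K sample paths of
    dX = b(X) dt + sigma dW, of the kind used to produce regression data."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_steps = int(round(T / dt))
    d = x0.shape[0]
    X = np.empty((n_steps + 1, K, d))
    X[0] = x0                                    # all K paths start at x0
    for n in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal((K, d))
        X[n + 1] = X[n] + b(X[n]) * dt + dW @ sigma.T
    return X
```

For the setting quoted below (d = 100, T = 1, Δt = 0.01, K = 2000, x0 = 0), one would call, e.g., `simulate_sde(np.zeros(100), lambda x: np.zeros_like(x), np.sqrt(2) * np.eye(100), T=1.0, dt=0.01, K=2000)`; the zero drift and the diffusion matrix here are purely illustrative.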
Hardware Specification: No. The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory specifications used for running the experiments; it only mentions the software libraries used.
Software Dependencies: No. The paper mentions using PyTorch and the Xerus library ('For the Neural Network part we use PyTorch (Paszke et al., 2019) and for the TT implementation we use the Xerus library (Huber & Wolf, 2017)'), but it does not specify version numbers for these software dependencies, which reproducibility would require.
Experiment Setup: Yes. 'In our experiments we consider d = 100, T = 1, Δt = 0.01, x0 = (0, ..., 0) and K = 2000 samples. ... For the tensor trains we try different polynomial degrees, and it turns out that choosing constant ansatz functions is the best choice, while fixing the rank to be 1. For the NNs we use a DenseNet-like architecture with 4 hidden layers (all the details can be found in Appendices C and D).' Also: 'We set the TT-rank to 2, use polynomial degree 3 and refer to Appendix D for further details on the TT and NN configurations.'
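A rough PyTorch rendering of what a 'DenseNet-like architecture with 4 hidden layers' could look like is sketched below; the width, activation, and scalar output are assumptions (the paper's exact configuration is in its Appendices C and D), and the defining feature is that each layer receives the concatenation of the input and all previous hidden activations.

```python
import torch
import torch.nn as nn

class DenseNetLike(nn.Module):
    """Sketch of a densely connected feed-forward network with 4 hidden
    layers; width and activation are illustrative assumptions."""
    def __init__(self, d=100, width=110, n_hidden=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_features = d
        for _ in range(n_hidden):
            self.layers.append(nn.Linear(in_features, width))
            in_features += width                 # dense (skip) connections
        self.out = nn.Linear(in_features, 1)     # scalar value approximation
        self.act = nn.Tanh()

    def forward(self, x):
        h = x
        for layer in self.layers:
            h = torch.cat([h, self.act(layer(h))], dim=-1)
        return self.out(h)
```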