Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Operator learning without the adjoint
Authors: Nicolas Boullé, Diana Halikias, Samuel E. Otto, Alex Townsend
JMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform numerical experiments showing that the convergence rate of our approximation is in close agreement with the theoretical predictions. Finally, we analyze our bound for solution operators of elliptic PDEs perturbed away from self-adjointness by lower-order terms. The linear degradation in performance with increasing non-self-adjointness predicted by our analysis is in agreement with deep learning experiments performed using standard operator learning methods. We perform a deep learning experiment to approximate the Green's function associated with the one-dimensional stationary convection-diffusion equation with homogeneous Dirichlet boundary conditions on Ω = [0, 1]. |
| Researcher Affiliation | Academia | Nicolas Boullé (EMAIL), Department of Mathematics, Imperial College London, London, SW7 2AZ, UK; Diana Halikias (EMAIL), Department of Mathematics, Cornell University, Ithaca, NY 14853, USA; Samuel E. Otto (EMAIL), Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853, USA; Alex Townsend (EMAIL), Department of Mathematics, Cornell University, Ithaca, NY 14853, USA |
| Pseudocode | Yes | Algorithm 1: Adjoint-free approximation algorithm; Algorithm 2: Estimation of LA; Algorithm 3: Extension-based adjoint-free approximation algorithm |
| Open Source Code | Yes | Codes and data supporting this paper are publicly available on GitHub at https://github.com/NBoulle/OperatorLearningAdjoint. |
| Open Datasets | Yes | Codes and data supporting this paper are publicly available on GitHub at https://github.com/NBoulle/OperatorLearningAdjoint. |
| Dataset Splits | No | The paper states that 25 training pairs were sampled and evaluated at different resolutions, but does not provide explicit split percentages or counts for training, validation, and test sets. For example: "To do this, we approximate the Green's function associated with Eq. (3) using 25 training pairs sampled on a grid with resolution s = 200 and report the relative error when evaluating the Green's function at different resolutions from s = 10 to s = 400." |
| Hardware Specification | No | The paper does not mention any specific hardware used for running its experiments, such as GPU or CPU models, or cloud computing specifications. |
| Software Dependencies | No | The paper mentions several software systems and libraries, including "Chebfun software system (Driscoll et al., 2014)", "TensorFlow library (Abadi et al., 2015)", "Adam (Kingma and Ba, 2015) and L-BFGS (Byrd et al., 1995) optimization algorithms", "Firedrake finite element software (Rathgeber et al., 2016)", and "Scalable Library for Eigenvalue Problem Computations (SLEPc) (Hernandez et al., 2005)". However, it does not provide specific version numbers for any of these. |
| Experiment Setup | No | The paper describes the data generation process (e.g., "25 random functions from a Gaussian process with squared-exponential kernel and length-scale parameter ℓ = 0.03", "sampled on a uniform grid with 200 points"), the optimization algorithms used ("Adam" and "L-BFGS"), and parameters for the PDE (e.g., "convection parameters c = 0, c = 5, and c = 10"). However, it does not provide specific hyperparameters for the neural network training, such as learning rates, batch sizes, or the number of epochs for the Adam or L-BFGS optimizers. |
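As a concrete illustration of the data-generation step quoted in the Experiment Setup row, the sketch below samples 25 random functions from a zero-mean Gaussian process with a squared-exponential kernel and length-scale ℓ = 0.03 on a uniform grid of 200 points over [0, 1]. This is an illustrative reconstruction only; function and parameter names are ours, not taken from the authors' released code, which should be consulted for the exact procedure.

```python
import numpy as np

def sample_gp_functions(n_samples=25, s=200, length_scale=0.03, seed=0):
    """Sample random functions from a zero-mean Gaussian process with a
    squared-exponential kernel on a uniform grid over [0, 1].

    Illustrative sketch of the data generation described in the paper;
    not the authors' implementation.
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, s)  # uniform grid with s points on [0, 1]
    # Squared-exponential covariance: k(x, x') = exp(-|x - x'|^2 / (2 l^2))
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * length_scale**2))
    # The SE kernel matrix is severely ill-conditioned on a fine grid, so
    # factor it via an eigendecomposition with negative eigenvalues clipped
    # rather than a plain Cholesky factorization.
    w, V = np.linalg.eigh(K)
    L = V * np.sqrt(np.clip(w, 0.0, None))
    # Each column of F is one sampled training function evaluated on the grid.
    F = L @ rng.standard_normal((s, n_samples))
    return x, F

x, F = sample_gp_functions()
print(F.shape)  # (200, 25)
```

Pairs (f, u) for operator learning would then be formed by applying the forward map (e.g., solving the convection-diffusion equation with each sampled f as the right-hand side) using a PDE solver such as the Firedrake software mentioned in the paper.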