Transform Once: Efficient Operator Learning in Frequency Domain
Authors: Michael Poli, Stefano Massaroli, Federico Berto, Jinkyoo Park, Tri Dao, Christopher Ré, Stefano Ermon
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We perform extensive experiments on learning the solution operator of spatio-temporal dynamics, including incompressible Navier-Stokes, turbulent flows around airfoils and high-resolution video of smoke." |
| Researcher Affiliation | Academia | Michael Poli (Stanford University, DiffEqML); Stefano Massaroli (Mila, DiffEqML); Federico Berto (KAIST, DiffEqML); Jinkyoo Park (KAIST); Tri Dao (Stanford University); Christopher Ré (Stanford University); Stefano Ermon (Stanford University, CZ Biohub) |
| Pseudocode | No | The paper describes mathematical formulations and derivations, but does not include any specific blocks or figures labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | The code is available at https://github.com/DiffEqML/kairos. |
| Open Datasets | Yes | "We use data introduced in (Thuerey et al., 2020)" and "We use the Scalar Flow dataset introduced in (Eckert et al., 2019)" |
| Dataset Splits | No | The paper mentions training, testing, and sometimes implies validation (e.g., 'test performance', 'training runs'), but it does not specify explicit dataset split percentages (e.g., 80/10/10) or methods for creating those splits. |
| Hardware Specification | No | The paper reports training times and computational speedups, but does not specify the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions 'Weights & Biases (wandb)' but does not list specific software dependencies with their version numbers required for reproduction (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | "Training time (500 epochs) for T1 is cut to 20 minutes down from 40 of FNOs, matching the model speedup." and "All models truncate to m = 24, except FFNOs to m = 32." and "We perform a search on the most representative hyperparameters (detailed in the Appendix)." |