PETAL: Physics Emulation Through Averaged Linearizations for Solving Inverse Problems
Authors: Jihui Jin, Etienne Ollivier, Richard Touret, Matthew McKinley, Karim Sabra, Justin Romberg
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficacy on an ocean acoustic tomography (OAT) example that aims to recover ocean sound speed profile (SSP) variations from acoustic observations (e.g. eigenray arrival times) within a simulation of ocean dynamics in the Gulf of Mexico. The full results are summarized in Table 1. |
| Researcher Affiliation | Academia | Jihui Jin, Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, jihui@gatech.edu; Etienne Ollivier, Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332; Richard Touret, Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332; Matthew McKinley, Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332; Karim G. Sabra, Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332; Justin K. Romberg, Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 |
| Pseudocode | No | The paper describes the architecture and methods in text and diagrams (Figure 1), but it does not contain any explicit pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing its source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We perform our experiments on a high-fidelity, month-long simulation of the Gulf of Mexico, as seen in Figure 2 [19, 20]. |
| Dataset Splits | Yes | The first 1000 time samples are used for training, the next 200 for validation, and the remaining 239 for testing for each slice (see the split sketch below the table). |
| Hardware Specification | Yes | All experiments were performed on a GeForce RTX 2080 Super. |
| Software Dependencies | No | The paper mentions the use of 'PyTorch' and 'AdamW' but does not provide specific version numbers for these software dependencies (e.g., 'PyTorch 1.9' or 'Python 3.8'). |
| Experiment Setup | Yes | The model was trained using AdamW with a learning rate of 1e-5 for 500 epochs. The learning rate was dropped by a factor of 0.2 at epoch 300. All models are optimized using PyTorch's Stochastic Gradient Descent with a learning rate of 50 for 1000 epochs. We use two forms of regularization: an ℓ2 penalty on x with a scale of 1e-7 and a Sobolev penalty (ℓ2 on the discrete x and y gradient) with a scale of 1e-4. (A configuration sketch follows the table.) |
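For concreteness, a minimal sketch of the chronological split quoted in the Dataset Splits row, assuming the 1439 time samples of one slice are already stored as a single array; the file name and loading step are hypothetical:

```python
import numpy as np

# Hypothetical loading step: one slice's SSP snapshots, ordered in time.
# 1439 samples total = 1000 train + 200 validation + 239 test.
ssp = np.load("gulf_of_mexico_slice.npy")  # placeholder path

train = ssp[:1000]     # first 1000 time samples
val = ssp[1000:1200]   # next 200
test = ssp[1200:]      # remaining 239
```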
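And a hedged PyTorch sketch of the quoted training and inverse-solve settings from the Experiment Setup row; `model`, the grid shape, and the data-misfit term are placeholders, not the authors' code:

```python
import torch

# Emulator training (quoted settings): AdamW, lr 1e-5, 500 epochs,
# learning rate dropped by a factor of 0.2 at epoch 300.
model = torch.nn.Linear(64, 64)  # placeholder for the PETAL network
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[300], gamma=0.2)

# Inverse solve (quoted settings): SGD on the SSP estimate x, lr 50,
# 1000 epochs, with a 1e-7-scaled l2 penalty and a 1e-4-scaled Sobolev penalty.
x = torch.zeros(64, 64, requires_grad=True)  # placeholder SSP grid
solver = torch.optim.SGD([x], lr=50.0)

def sobolev_penalty(x):
    # Squared l2 norm of the discrete x- and y-gradients of x.
    dx = x[:, 1:] - x[:, :-1]
    dy = x[1:, :] - x[:-1, :]
    return (dx ** 2).sum() + (dy ** 2).sum()

def objective(data_misfit, x):
    # data_misfit would compare emulated arrival times to observations.
    return data_misfit + 1e-7 * (x ** 2).sum() + 1e-4 * sobolev_penalty(x)
```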