Learning Vortex Dynamics for Fluid Inference and Prediction
Authors: Yitong Deng, Hong-Xing Yu, Jiajun Wu, Bo Zhu
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare our method with a range of existing methods on both synthetic and real-world videos, demonstrating improved reconstruction quality, visual plausibility, and physical integrity. |
| Researcher Affiliation | Academia | Yitong Deng (Dartmouth College, Stanford University); Hong-Xing Yu (Stanford University); Jiajun Wu (Stanford University); Bo Zhu (Dartmouth College) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our video results, code, and data can be found at our project website: https://yitongdeng.github.io/vortex_learning_webpage |
| Open Datasets | Yes | We conduct benchmark testing on synthetic videos generated using high-order numeric simulation schemes as well as real-world videos in the wild. |
| Dataset Splits | No | Only the first 100 frames are disclosed to train all methods, and future predictions are tested and examined on the following 200 frames; for another video with 150 frames, the first 100 frames are used for training while the remaining 50 are reserved for testing. The paper does not specify a distinct validation split for hyperparameter tuning or early stopping. |
| Hardware Specification | Yes | Running on a laptop with Nvidia RTX 3070 Ti and Intel Core i7-12700H, our model takes around 0.4s per training iteration, and around 40000 iterations to converge (for a 256×256 video with 100 frames). |
| Software Dependencies | No | The paper mentions software components like the 'Adam optimizer' and implicitly relies on a neural-network framework (e.g., for N1 and N2), but it does not specify explicit names or version numbers for these software dependencies (e.g., 'PyTorch 1.9', 'Python 3.8'). |
| Experiment Setup | Yes | We use the Adam optimizer with β1 = 0.9, β2 = 0.999, and learning rates 0.0003, 0.001, 0.005, and 0.005 for N1, N2, Ω, and a fourth learned quantity, respectively. We use a step learning rate scheduler and set the learning rate to decay to 0.1 of the original value at iteration 20000. We use a batch size of 4, so for each iteration, 4 starting times are picked uniformly at random from [0, 1, ..., t_E] for evaluation. The sliding-window size m is set to 2. (See the configuration sketch after this table.) |
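
The optimizer and scheduler settings reported in the "Experiment Setup" row map onto a fairly standard per-parameter-group training configuration. The following is a minimal sketch, assuming a PyTorch-style implementation (the paper does not name its framework or versions); the module and parameter names `traj_net` (for N1), `vort_net` (for N2), `omega` (for Ω), and `extra` (for the fourth learned quantity), their shapes, and the placeholder loss are hypothetical stand-ins, not the authors' code.

```python
import torch

# Hypothetical stand-ins for the learned components; shapes are illustrative only.
traj_net = torch.nn.Linear(3, 2)                 # stand-in for N1
vort_net = torch.nn.Linear(3, 1)                 # stand-in for N2
omega = torch.nn.Parameter(torch.zeros(16))      # stand-in for the learned Ω
extra = torch.nn.Parameter(torch.ones(16))       # stand-in for the fourth learned quantity

# Adam with beta1 = 0.9, beta2 = 0.999 and per-group learning rates
# 0.0003 / 0.001 / 0.005 / 0.005, as reported in the paper.
optimizer = torch.optim.Adam(
    [
        {"params": traj_net.parameters(), "lr": 3e-4},
        {"params": vort_net.parameters(), "lr": 1e-3},
        {"params": [omega], "lr": 5e-3},
        {"params": [extra], "lr": 5e-3},
    ],
    betas=(0.9, 0.999),
)

# Step scheduler: decay every learning rate to 0.1x of its original value at iteration 20000.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20000, gamma=0.1)

num_iters = 40000   # roughly the reported number of iterations to converge
batch_size = 4      # 4 starting times sampled per iteration
t_end = 98          # last admissible start with a sliding window of m = 2 over 100 frames (assumption)

for _ in range(num_iters):
    # Draw 4 starting frames uniformly at random from [0, ..., t_E].
    starts = torch.randint(0, t_end + 1, (batch_size,))

    # Placeholder loss: the actual objective rolls the learned vortex dynamics forward
    # over the m-frame window from each start time and penalizes the rendering error.
    loss = omega.pow(2).mean() + extra.pow(2).mean()
    for p in list(traj_net.parameters()) + list(vort_net.parameters()):
        loss = loss + p.pow(2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

The per-group learning rates, betas, decay schedule, batch size, and window size above are the values quoted in the table; everything else (network shapes, the value of `t_end`, and the loss) is a placeholder to make the configuration runnable.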