Fast Fluid Simulation via Dynamic Multi-Scale Gridding
Authors: Jinxian Liu, Ye Chen, Bingbing Ni, Wei Ren, Zhenbo Yu, Xiaoyang Huang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments: We conduct experiments on multiple datasets and compare accuracy and inference time with prior works. Quantitative and qualitative results show that our method achieves comparable fluid-simulation quality with markedly faster inference. As the visualization results show, our method produces high-visual-fidelity simulations. Moreover, we perform extensive ablation studies to show the effectiveness of each component and hyper-parameter setting. |
| Researcher Affiliation | Collaboration | Jinxian Liu¹, Ye Chen¹, Bingbing Ni¹*, Wei Ren², Zhenbo Yu¹, Xiaoyang Huang¹. ¹Shanghai Jiao Tong University, Shanghai 200240, China; ²Huawei HiSilicon |
| Pseudocode | Yes | Algorithm 1: Dynamic Multi-Scale Gridding (Inference version). A hedged sketch of the gridding step appears after this table. |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | DPI Dam Break data is generated with FleX, a position-based simulator that targets real-time applications. This data is used in both (Li et al.) and (Ummenhofer et al.). ... This dataset is generated with DFSPH (Bender and Koschier), which prioritizes simulation fidelity over runtime. |
| Dataset Splits | No | The paper specifies training and testing splits, but does not explicitly mention a validation set split. For example: "2000 scenes and 300 scenes are generated for training and testing respectively." and "200 scenes and 20 scenes are generated for training and testing respectively." |
| Hardware Specification | Yes | All runtimes of our results are measured on a system with an Intel Xeon 6150 CPU and an NVIDIA RTX 2080Ti. |
| Software Dependencies | No | The paper refers to various methods (e.g., CConv, DPI-Nets) and mentions that its ConvNet is based on Continuous Convolution, but it does not specify any software names with version numbers (e.g., Python, PyTorch, or CUDA versions). |
| Experiment Setup | Yes | Implementation Details: We set the threshold δ used for generating multi-scale micelles for training data to 0.5, and the threshold ε for test data to 0.8. The minimum and maximum number of particles per micelle are set to 16 and 100 respectively. To limit the size of the micelles, we set the maximum depth of the Octree to 10 when generating multi-scale micelles for all data. We train our network for 50000 iterations with a batch size of 16 and an initial learning rate of 0.001, halving the learning rate at steps [20000, 25000, ..., 45000]. A sketch of this schedule appears after the table. |
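
The excerpt names Algorithm 1 and fixes its hyper-parameters (thresholds δ = 0.5 for training and ε = 0.8 at test time, 16 to 100 particles per micelle, Octree depth at most 10), but not the split criterion itself. Below is a minimal Python/NumPy sketch of the gridding step under those numbers; the criterion `split_score` (per-cell velocity variance here) and all function names are assumptions, not the authors' code.

```python
import numpy as np

# Hyper-parameters quoted from the paper's implementation details.
MIN_PARTICLES = 16   # minimum particles per micelle
MAX_PARTICLES = 100  # maximum particles per micelle
MAX_DEPTH = 10       # maximum Octree depth when generating micelles
THRESHOLD = 0.8      # epsilon for test data (delta = 0.5 for training data)

def split_score(velocities):
    """Hypothetical split criterion. The excerpt only says a threshold is
    applied; per-cell velocity variance is an assumption made here."""
    return float(np.var(velocities, axis=0).sum())

def build_micelles(positions, velocities, center, half_size,
                   depth=0, threshold=THRESHOLD):
    """Recursively partition particles into multi-scale micelles with an
    Octree. Returns a list of index arrays into `positions`."""
    n = len(positions)
    if n == 0:
        return []
    uniform = n <= MAX_PARTICLES and split_score(velocities) < threshold
    # Stop when the cell is uniform enough, too deep, or too small to split.
    # (The paper's rule for undersized cells is not given in the excerpt;
    # small octants are simply kept as their own micelle here.)
    if uniform or depth == MAX_DEPTH or n <= MIN_PARTICLES:
        return [np.arange(n)]
    micelles = []
    # Assign each particle to one of the 8 child octants of this cell.
    octant = ((positions > center) * np.array([1, 2, 4])).sum(axis=1)
    for child in range(8):
        mask = octant == child
        if not mask.any():
            continue
        bits = np.array([(child >> 0) & 1, (child >> 1) & 1, (child >> 2) & 1])
        sub = build_micelles(positions[mask], velocities[mask],
                             center + (bits - 0.5) * half_size, half_size / 2,
                             depth + 1, threshold)
        # Map child-local indices back to this cell's index space.
        local = np.flatnonzero(mask)
        micelles.extend(local[m] for m in sub)
    return micelles

# Example: group 5000 random particles in the unit box.
pos = np.random.rand(5000, 3)
vel = 0.1 * np.random.randn(5000, 3)
groups = build_micelles(pos, vel, center=np.full(3, 0.5), half_size=0.5)
```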
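
The training recipe in the Experiment Setup row maps onto a standard step-decay schedule. A sketch of how it could be reproduced, assuming PyTorch and an Adam optimizer (the paper names neither, and the model and loss below are placeholders):

```python
import torch

# Stated setup: 50000 iterations, batch size 16, initial LR 1e-3,
# LR halved at steps 20000, 25000, ..., 45000.
model = torch.nn.Linear(3, 3)  # placeholder for the paper's ConvNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer type assumed
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=list(range(20_000, 50_000, 5_000)), gamma=0.5)

for step in range(50_000):
    batch = torch.randn(16, 3)         # placeholder batch of 16 scenes
    loss = model(batch).pow(2).mean()  # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                   # halves the LR at each milestone
```

After the last milestone at step 45000 the learning rate has been halved six times, i.e. 0.001 / 2⁶ ≈ 1.56e-5 for the final 5000 iterations.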