Inferring Hybrid Neural Fluid Fields from Videos

Authors: Hong-Xing Yu, Yang Zheng, Yuan Gao, Yitong Deng, Bo Zhu, Jiajun Wu

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 Experiments |
| Researcher Affiliation | Academia | Stanford University; Georgia Institute of Technology |
| Pseudocode | No | The paper describes methods and processes such as the pressure projection solver and the MacCormack advection method, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The project website https://kovenyu.com/HyFluid/ is provided, but the paper neither contains an explicit statement that the source code for the methodology is released nor provides a direct link to a code repository. |
| Open Datasets | Yes | "For real captures, we use the Scalar Flow dataset (Eckert et al., 2019) which consists of videos of buoyancy-driven rising smoke plumes." |
| Dataset Splits | No | The paper states, "For each scene, we use four videos for training, and one held-out video for testing (i.e., as the groundtruth for the novel view)," which specifies training and test splits but does not mention a separate validation split. |
| Hardware Specification | Yes | "We train our model on a single A100 GPU for around 9 hours in total." |
| Software Dependencies | No | The paper mentions PyTorch and the Taichi (Hu et al., 2019) language as software used, but it does not provide specific version numbers for these or any other key software dependencies. |
| Experiment Setup | Yes | "We use an Adam optimizer with a learning rate 0.01. In the first stage, we train the density and radiance for 200,000 iterations. In the second stage, we jointly train the model for 50,000 iterations. In the third stage, we do training for 5,000 iterations. We empirically set the loss weights to β_render = 10,000, β_density = 0.001, β_proj = 1, β_laminar = 10. For the laminar loss, we set the coefficient γ = 0.2." |
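The reported three-stage schedule and loss weighting can be captured in a minimal pure-Python configuration sketch. The stage names, dictionary keys, and the `total_loss` aggregation helper are assumptions for illustration; only the iteration counts, learning rate, and weight values come from the paper's quoted setup:

```python
# Sketch of the training setup quoted above. Stage names, loss-term keys,
# and total_loss() are hypothetical; the numeric values are the reported ones.

LEARNING_RATE = 0.01  # Adam optimizer, per the paper

STAGES = [
    {"name": "density_and_radiance", "iterations": 200_000},  # stage 1
    {"name": "joint_training",       "iterations": 50_000},   # stage 2
    {"name": "final_training",       "iterations": 5_000},    # stage 3
]

LOSS_WEIGHTS = {
    "render":  10_000.0,  # beta_render
    "density": 0.001,     # beta_density
    "proj":    1.0,       # beta_proj
    "laminar": 10.0,      # beta_laminar (the laminar loss itself uses gamma = 0.2)
}

def total_loss(terms):
    """Weighted sum of per-term loss values, keyed to match LOSS_WEIGHTS."""
    return sum(LOSS_WEIGHTS[k] * v for k, v in terms.items())
```

This keeps every hyperparameter the checklist attributes to the paper in one place, which is the kind of artifact the "Experiment Setup" row is checking for.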