Neural Physical Simulation with Multi-Resolution Hash Grid Encoding

Authors: Haoxiang Wang, Tao Yu, Tianwei Yang, Hui Qiao, Qionghai Dai

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments and results under different simulation tasks demonstrate the effectiveness, efficiency, and potential of our method. Our method shows higher accuracy and strong flexibility for various simulation problems, e.g., large elastic deformations, complex fluid dynamics, and multi-scale phenomena, which remain challenging for existing neural physical solvers.
Researcher Affiliation | Academia | Haoxiang Wang (1), Tao Yu (1), Tianwei Yang (1), Hui Qiao (1, 2), Qionghai Dai (1). (1) Department of Automation & BNRist, Tsinghua University; (2) Shanghai Artificial Intelligence Laboratory.
Pseudocode | No | The paper describes methods and equations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing open-source code for the described methodology, nor a link to a code repository.
Open Datasets | No | The paper describes generating 'ground truth' data through high-resolution simulations for comparison, but does not provide access information (link, DOI, or specific citation with authors/year) for a publicly available or open dataset used for training.
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, and testing.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running the experiments are provided.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | No | The paper describes the overall framework and certain methods (e.g., finite difference), but it does not provide specific experimental setup details such as hyperparameter values (learning rate, batch size, epochs), optimizer settings, or detailed training configurations.
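Since the paper provides neither pseudocode nor open-source code for its core multi-resolution hash grid encoding, the sketch below is a minimal, hypothetical illustration of how such an encoding is commonly implemented (in the spirit of Instant-NGP-style hash grids): several resolution levels, each backed by a hashed table of learnable feature vectors, queried by hashing the corners of the cell containing a point and interpolating. It is not the authors' implementation; the function names, table sizes, feature dimensions, and growth factor are assumptions.

import numpy as np

# Hypothetical, minimal 2D sketch of multi-resolution hash grid encoding;
# the paper does not specify its actual implementation details.

PRIMES = np.array([1, 2654435761], dtype=np.uint64)  # spatial-hash multipliers

def hash_coords(coords, table_size):
    """Hash integer grid coordinates into [0, table_size)."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(coords.shape[-1]):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return (h % np.uint64(table_size)).astype(np.int64)

def encode(x, tables, base_res=16, growth=1.5):
    """Encode points x in [0,1]^2 with one hashed feature grid per level.

    tables: list of (table_size, feat_dim) arrays, one per resolution level.
    Returns the concatenated, bilinearly interpolated features per point.
    """
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)   # grid resolution at this level
        pos = x * res
        lo = np.floor(pos).astype(np.int64)     # lower corner of the cell
        frac = pos - lo                          # position inside the cell
        # Gather features at the 4 cell corners and bilinearly interpolate.
        acc = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                corner = lo + np.array([dx, dy])
                w = ((frac[:, 0] if dx else 1 - frac[:, 0]) *
                     (frac[:, 1] if dy else 1 - frac[:, 1]))[:, None]
                acc = acc + w * table[hash_coords(corner, table.shape[0])]
        feats.append(acc)
    return np.concatenate(feats, axis=-1)        # shape (N, num_levels * feat_dim)

# Usage: 4 levels, 2^14 entries per table, 2 features per entry.
rng = np.random.default_rng(0)
tables = [rng.normal(0, 1e-4, size=(2**14, 2)) for _ in range(4)]
x = rng.random((8, 2))
print(encode(x, tables).shape)  # (8, 8)

In practice the feature tables are trainable parameters and the concatenated encoding feeds a small MLP; the example above only reproduces the lookup-and-interpolate step at a conceptual level.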