Neural Conservation Laws: A Divergence-Free Perspective
Authors: Jack Richter-Powell, Yaron Lipman, Ricky T. Q. Chen
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we experimentally validate our approaches by computing neural network-based solutions to fluid equations, solving for the Hodge decomposition, and learning dynamical optimal transport maps. |
| Researcher Affiliation | Collaboration | Jack Richter-Powell (Vector Institute, jack.richter-powell@mcgill.ca); Yaron Lipman (Meta AI, ylipman@meta.com); Ricky T. Q. Chen (Meta AI, rtqichen@meta.com) |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | Yes | Code for the experiments is available at https://github.com/facebookresearch/neural-conservation-law. |
| Open Datasets | No | The paper defines initial conditions for its PDE simulations analytically (e.g., closed-form expressions such as $\rho_0(x, y) = (z_1 + z_3)^2 + 1$, $v_0(x, y) = [e^{z_3}, e^{z_1}/2]$). For optimal transport, it uses pairs of 2D toy densities (e.g., Circles → Pinwheel) but does not provide access information (links, DOIs, or formal citations) for them as publicly available datasets. |
| Dataset Splits | No | The paper trains models for PDE solutions and optimal transport, but it does not specify explicit training, validation, or test dataset splits (e.g., percentages or sample counts). The problems involve continuous functions and initial conditions rather than discrete datasets with standard splits. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, or memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | The acknowledgements list software such as 'PyTorch [Paszke et al., 2019]', 'JAX [Bradbury et al., 2018]', 'NumPy [Oliphant, 2006]', and 'SciPy [Jones et al., 2014]', but do not provide specific version numbers for these software dependencies, only the publication year of their respective papers. |
| Experiment Setup | Yes | In Section 7.2, for the dynamical optimal transport experiments, the paper states: 'Specifically, we train with the loss... where p̄ᵢ is a mixture between pᵢ and a uniform density over a sufficiently large area, for i = 0, 1, and λ is a hyperparameter. We use a batch size of 256 and set K = 128 (from Equation 17).' Hedged sketches of the underlying objective and of the paper's divergence-free parameterization appear below this table. |
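
The loss quoted in the Experiment Setup row builds on the standard Benamou–Brenier dynamical optimal transport objective. For context, here is a minimal statement of that objective; this is the textbook formulation, not a quote from the paper, whose exact λ-weighted loss is given in Section 7.2:

$$
\min_{\rho,\, v}\ \int_0^1 \!\!\int_{\mathbb{R}^d} \rho(t, x)\,\lVert v(t, x) \rVert^2 \, dx \, dt
\quad \text{s.t.}\quad
\partial_t \rho + \nabla \cdot (\rho v) = 0,\ \ \rho(0, \cdot) = p_0,\ \ \rho(1, \cdot) = p_1.
$$

Because the paper's parameterization satisfies the continuity-equation constraint by construction, training reduces (on this reading) to minimizing the kinetic-energy term together with λ-weighted penalties matching the boundary densities p₀ and p₁.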
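For readers checking reproducibility against the released code, here is a minimal, self-contained sketch of the divergence-free construction the title refers to: a vector field obtained as the row-divergence of an antisymmetric matrix field has zero divergence identically. The MLP, shapes, and parameter names below are illustrative assumptions, not the repository's actual architecture (the paper acknowledges both PyTorch and JAX; JAX is used here).

```python
import jax
import jax.numpy as jnp

D = 3  # spatial dimension (illustrative choice)

def init_params(key, d=D, hidden=64):
    # Hypothetical two-layer MLP parameters; not the released architecture.
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (d, hidden)) / jnp.sqrt(d),
        "b1": jnp.zeros(hidden),
        "W2": jax.random.normal(k2, (hidden, d * d)) / jnp.sqrt(hidden),
        "b2": jnp.zeros(d * d),
    }

def antisym_matrix(params, x):
    # A(x) = M(x) - M(x)^T is antisymmetric for any MLP output M(x).
    h = jnp.tanh(x @ params["W1"] + params["b1"])
    M = (h @ params["W2"] + params["b2"]).reshape(D, D)
    return M - M.T

def vector_field(params, x):
    # v_i(x) = sum_j dA_ij/dx_j. Antisymmetry makes the divergence vanish:
    # sum_i dv_i/dx_i = sum_{i,j} d^2 A_ij / (dx_i dx_j) = 0 identically.
    jac = jax.jacfwd(antisym_matrix, argnums=1)(params, x)  # shape (D, D, D)
    return jnp.einsum("ijj->i", jac)

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jnp.array([0.1, -0.3, 0.7])
print(vector_field(params, x))
# Divergence check: trace of dv/dx should be ~0 (up to float error).
print(jnp.trace(jax.jacfwd(vector_field, argnums=1)(params, x)))
```

At any input point, the printed divergence is zero up to floating-point error; this is the property that lets conservation laws like the continuity equation be enforced exactly by construction rather than penalized in the loss.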