Neural Manifold Ordinary Differential Equations
Authors: Aaron Lou, Derek Lim, Isay Katsman, Leo Huang, Qingxuan Jiang, Ser-Nam Lim, Christopher M. De Sa
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 7 experiments: To test our MCNF models, we run density estimation and variational inference experiments. |
| Researcher Affiliation | Collaboration | Aaron Lou*, Derek Lim*, Isay Katsman*, Leo Huang*, Qingxuan Jiang (Cornell University, {al968, dl772, isk22, ah839, qj46}@cornell.edu); Ser-Nam Lim (Facebook AI, sernam@gmail.com); Christopher De Sa (Cornell University, cdesa@cs.cornell.edu) |
| Pseudocode | Yes | Algorithm 1: Dynamic Chart Forward Pass. Given f, local charts ϕ_x, starting condition z_s, and starting/ending times t_s, t_e. (A hedged code sketch of this forward pass follows the table.) |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code of the authors' own methodology. The mention of 'prerelease code' refers to code provided by Joey Bose for comparison purposes, not a release of the authors' implementation. |
| Open Datasets | Yes | We train a hyperbolic VAE [34] and Euclidean VAE [27] for variational inference on Binarized Omniglot [28] and Binarized MNIST [30]. |
| Dataset Splits | No | The paper mentions using Binarized Omniglot [28] and Binarized MNIST [30], which are standard datasets, but it does not explicitly state the training, validation, or test split percentages, sample counts, or specific methodology for splitting the data used in their experiments. It only states 'Full experimental details can be found in Appendix C', which primarily covers hyperparameters. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. It only vaguely mentions 'funding equipment' in the acknowledgements. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer [25]' but does not provide specific version numbers for any software dependencies, programming languages, or libraries used for the implementation or experiments. |
| Experiment Setup | Yes | For the models and baselines evaluated, we use 3 hidden layers for the Neural ODE block with 64 units per layer... For the MCNF, we used a constant learning rate of 1e-3, 1000 epochs, and a batch size of 64 for all experiments. We used the Adam optimizer [25]. |
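As a companion to the Pseudocode row above, here is a minimal sketch of a dynamic-chart forward pass in the spirit of Algorithm 1; it is not the authors' code. It uses the unit sphere with exponential-map charts and a fixed-step Euler integrator, and names such as `exp_map`, `log_map`, `dynamic_chart_forward`, and the chart/step counts are illustrative assumptions rather than details taken from the paper.

```python
# Hedged sketch (not the authors' released code): a dynamic-chart forward pass
# on the unit sphere S^2, integrating dz/dt = f(z, t) by repeatedly solving
# short Euclidean ODEs in tangent-space charts and mapping back to the manifold.
import numpy as np

def exp_map(x, v):
    """Sphere exponential map (chart inverse): tangent vector v at x -> manifold point."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * (v / n)

def log_map(x, y):
    """Sphere logarithm map (chart): manifold point y -> tangent vector at x."""
    p = y - np.dot(x, y) * x            # projection of y onto the tangent space at x
    n = np.linalg.norm(p)
    if n < 1e-12:
        return np.zeros_like(x)
    theta = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    return theta * (p / n)

def project_tangent(x, v):
    """Project an ambient vector onto the tangent space at x (keeps dynamics on-manifold)."""
    return v - np.dot(x, v) * x

def dynamic_chart_forward(f, z_s, t_s, t_e, n_charts=10, steps_per_chart=20):
    """Integrate the manifold ODE from t_s to t_e, re-centering the chart after each segment."""
    z, t = z_s / np.linalg.norm(z_s), t_s
    dt_chart = (t_e - t_s) / n_charts
    for _ in range(n_charts):
        x = z                            # re-center the chart at the current point
        h = log_map(x, z)                # chart coordinates of z (zero vector here)
        dt = dt_chart / steps_per_chart
        for _ in range(steps_per_chart):
            # Push the dynamics into the chart. Using the tangent projection of f
            # is an approximation made to keep this sketch short; it is reasonable
            # only because each chart covers a short time interval.
            v = project_tangent(x, f(exp_map(x, h), t))
            h = h + dt * v               # Euler step in chart coordinates
            t = t + dt
        z = exp_map(x, h)                # map back to the manifold, then switch chart
    return z

# Usage: a toy vector field that rotates points about the z-axis.
if __name__ == "__main__":
    f = lambda z, t: np.cross(np.array([0.0, 0.0, 1.0]), z)
    z_e = dynamic_chart_forward(f, np.array([1.0, 0.0, 0.0]), 0.0, np.pi / 2)
    print(z_e)  # approximately [0, 1, 0]
```

The design point this illustrates is the chart cycle: map the current state into a local chart, integrate the pushed-forward dynamics for a short interval, map back to the manifold, and re-center the chart before continuing.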
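The Experiment Setup row reports concrete hyperparameters: 3 hidden layers of 64 units for the Neural ODE block, Adam, a constant learning rate of 1e-3, 1000 epochs, and a batch size of 64. The sketch below wires those numbers into a generic PyTorch training loop; PyTorch itself and the names `VectorField`, `loader`, and `loss_fn` are assumptions for illustration, since the paper does not state its software stack.

```python
# Hedged sketch: the reported hyperparameters in a generic PyTorch training loop.
# The framework and helper names are assumptions, not the authors' code.
import torch
import torch.nn as nn

class VectorField(nn.Module):
    """Neural ODE block dynamics: 3 hidden layers with 64 units each, as reported."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, t, z):
        # Concatenate time as an extra input feature (a common Neural ODE choice).
        t_col = torch.full_like(z[:, :1], float(t))
        return self.net(torch.cat([z, t_col], dim=-1))

def train(model, loader, loss_fn, epochs=1000, lr=1e-3):
    """Adam with a constant learning rate of 1e-3 for 1000 epochs, as reported."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in loader:          # loader is assumed to yield batches of size 64
            opt.zero_grad()
            loss = loss_fn(model, batch)
            loss.backward()
            opt.step()
    return model
```

A `torch.utils.data.DataLoader` constructed with `batch_size=64` would supply `loader`, matching the reported batch size.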