Equivariant Manifold Flows
Authors: Isay Katsman, Aaron Lou, Derek Lim, Qingxuan Jiang, Ser Nam Lim, Christopher M. De Sa
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the utility of our approach by learning quantum field theory-motivated invariant SU(n) densities and by correcting meteor impact dataset bias. In this section, we utilize instantiations of equivariant manifold flows to learn densities over various manifolds of interest that are invariant to certain symmetries. First, we construct flows on SU(n)... We visualize our results in Figure 3. |
| Researcher Affiliation | Collaboration | Isay Katsman*, Aaron Lou*, Derek Lim*, Qingxuan Jiang* Cornell University {isk22, al968, dl772, qj46}@cornell.edu Ser-Nam Lim Facebook AI sernam@gmail.com Christopher De Sa Cornell University cdesa@cs.cornell.edu |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for its methodology or a link to a code repository. |
| Open Datasets | Yes | We apply our isotropy invariant S2 flow (described in Section 5.2.1) to model the asteroid impact locations given by the dataset Meteorite Landings [31]. [31] Meteorite Landings. Meteorite landings dataset, March 2017. Retrieved from https://data.world/nasa/meteorite-landings. |
| Dataset Splits | No | The paper does not provide specific training/test/validation dataset splits, such as percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types) used for running its experiments. It only mentions that Facebook AI provided equipment funding. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes | We train for 100 epochs with a learning rate of 0.001 and a batch size of 200; our results are shown in Figure 4. |
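The only training hyperparameters the paper reports are 100 epochs, a learning rate of 0.001, and a batch size of 200. As a hedged sketch of what that schedule implies, the helper below computes batches per epoch and total optimizer steps from an assumed dataset size; the dataset size, model, and optimizer are not stated in the paper and are placeholders here.

```python
# Reported hyperparameters from the paper's experiment setup.
EPOCHS = 100
LEARNING_RATE = 1e-3
BATCH_SIZE = 200


def num_batches(dataset_size: int, batch_size: int = BATCH_SIZE) -> int:
    """Batches per epoch, counting a final partial batch (ceiling division)."""
    return -(-dataset_size // batch_size)


def total_steps(dataset_size: int) -> int:
    """Total optimizer steps over the full reported 100-epoch schedule."""
    return EPOCHS * num_batches(dataset_size)
```

For example, a hypothetical dataset of 1,000 samples would yield 5 batches per epoch and 500 optimizer steps in total under this schedule.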