Grassmann Manifold Flows for Stable Shape Generation
Authors: Ryoma Yataka, Kazuki Hirashima, Masashi Shiraishi
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results indicated that the proposed method could generate high-quality samples by capturing the data structure. |
| Researcher Affiliation | Industry | ¹Information Technology R&D Center (ITC), Mitsubishi Electric Corporation, Kamakura, Kanagawa 247-8501, Japan; ²Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA 02139, USA |
| Pseudocode | Yes | Algorithm 1: Random Sampling from p_{Gr(k,D)}([X]; [M], U, V). A hedged sketch of this style of manifold sampling appears after the table. |
| Open Source Code | No | The paper references third-party implementations (footnote 5: 'We used the authors' implementation: https://github.com/rtqichen/torchdiffeq.git'; footnote 6: 'We used the authors' implementations: https://github.com/ANLGBOY/SoftFlow.git and https://github.com/rtqichen/ffjord.git'), but it does not state that the authors' own implementation of the proposed method has been released. A minimal torchdiffeq usage sketch appears after the table. |
| Open Datasets | Yes | The code for the data distributions is presented in Appendix C.3.1 |
| Dataset Splits | Yes | To match the experiment conducted in Garcia Satorras et al. (2021), 1,000 validation and 1,000 testing samples were used for both datasets. |
| Hardware Specification | Yes | The Artificial Textures experiments ran on an Intel Core i7-9700 CPU with a single NVIDIA GTX 1060 GPU (6 GB). For DW4 and LJ13, a single NVIDIA Quadro RTX 8000 GPU with 48 GB of GDDR6 RAM was used. For QM9 Positional, a single NVIDIA A100 PCIe GPU with 80 GB of RAM was used. |
| Software Dependencies | No | The paper mentions software like 'PyTorch (Paszke et al. (2019))' and 'torch.autograd.grad (Paszke et al. (2017))', but it does not provide explicit version numbers (e.g., PyTorch 1.x, Python 3.x) for these dependencies. A generic sketch of how torch.autograd.grad is typically used in continuous flows appears after the table. |
| Experiment Setup | Yes | Table 4 provides a 'List of hyperparameters used in various experiments', including 'Learning Rate', 'Batch Size', 'Integration Time', 'atol', 'rtol', and 'Adjoint'. Sections C.3.1, C.3.2, and C.3.3 also describe 'Network Architecture' and 'Hyper-parameters'. A sketch showing how such solver options are passed to torchdiffeq appears after the table. |
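
The paper's Algorithm 1 samples from a matrix distribution p_{Gr(k,D)}([X]; [M], U, V) on the Grassmann manifold. The sketch below is *not* the authors' algorithm; it is a minimal illustration, assuming the common construction of drawing a Gaussian tangent vector at a base point [M] and pushing it through the Grassmann exponential map. The helper names `grassmann_exp` and `sample_near` are hypothetical.

```python
import torch

def grassmann_exp(Y, H):
    """Exponential map on Gr(k, D) at the point spanned by the orthonormal
    D x k matrix Y, applied to a horizontal tangent H (Y.T @ H == 0),
    via the thin SVD H = U diag(s) V^T (Edelman et al., 1998)."""
    U, s, Vh = torch.linalg.svd(H, full_matrices=False)
    V = Vh.mT
    # exp_Y(H) = Y V cos(S) V^T + U sin(S) V^T
    Y_new = Y @ V @ torch.diag(torch.cos(s)) @ Vh + U @ torch.diag(torch.sin(s)) @ Vh
    # Re-orthonormalize to stay numerically on the manifold.
    Q, _ = torch.linalg.qr(Y_new)
    return Q

def sample_near(M, scale=0.1, n=1):
    """Draw n points near the base point [M] (orthonormal D x k) by
    sampling Gaussian tangent vectors and mapping them to the manifold."""
    samples = []
    for _ in range(n):
        G = scale * torch.randn(*M.shape)
        # Project onto the horizontal space at M: H = (I - M M^T) G.
        H = G - M @ (M.mT @ G)
        samples.append(grassmann_exp(M, H))
    return torch.stack(samples)

# Usage: base point on Gr(2, 5), four nearby samples.
M, _ = torch.linalg.qr(torch.randn(5, 2))
X = sample_near(M, scale=0.2, n=4)
print(X.shape)  # torch.Size([4, 5, 2])
```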
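
The paper builds its continuous normalizing flows on torchdiffeq, and Table 4 lists the solver hyperparameters (Integration Time, atol, rtol, Adjoint). Below is a minimal sketch, assuming standard torchdiffeq usage, of how such options are wired together; the `Dynamics` network is a toy placeholder, not the paper's architecture (which is given in Appendix C.3).

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint  # adjoint backprop, per Table 4

class Dynamics(nn.Module):
    """Toy time-dependent vector field f(t, x); stands in for the flow network."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, x):
        tt = t.expand(x.shape[0], 1)  # broadcast scalar time over the batch
        return self.net(torch.cat([x, tt], dim=1))

func = Dynamics(dim=3)
x0 = torch.randn(8, 3)
t = torch.tensor([0.0, 1.0])  # "Integration Time" endpoints

# atol/rtol control the adaptive solver's error tolerances.
xt = odeint(func, x0, t, atol=1e-5, rtol=1e-5)
print(xt.shape)  # torch.Size([2, 8, 3]): states at t=0 and t=1
```

Using `odeint_adjoint` instead of `odeint` trades extra forward solves for O(1) memory in the backward pass, which is what the 'Adjoint' entry in Table 4 toggles.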
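
The report notes the paper cites torch.autograd.grad. In FFJORD-style continuous flows, this is typically used to estimate the divergence (trace of the Jacobian) of the dynamics, which drives the log-density change. The following is a hedged, generic sketch of Hutchinson's trace estimator; it illustrates the standard technique, not the paper's exact code, and `divergence_hutchinson` is a hypothetical name.

```python
import torch

def divergence_hutchinson(f_x, x, n_probes=1):
    """Estimate tr(df/dx) per batch element via Hutchinson's estimator,
    E_e[e^T (df/dx) e] with Rademacher probes e, using one
    vector-Jacobian product per probe through torch.autograd.grad."""
    div = torch.zeros(x.shape[0], device=x.device)
    for _ in range(n_probes):
        e = torch.randint(0, 2, x.shape, device=x.device, dtype=x.dtype) * 2 - 1
        # grad of <f(x), e> w.r.t. x yields the VJP e^T (df/dx).
        eJ = torch.autograd.grad(f_x, x, grad_outputs=e,
                                 create_graph=True, retain_graph=True)[0]
        div += (eJ * e).flatten(1).sum(dim=1)
    return div / n_probes

# Usage: divergence of a toy vector field f(x) = tanh(x W^T).
x = torch.randn(8, 3, requires_grad=True)
W = torch.randn(3, 3)
f_x = torch.tanh(x @ W.T)
print(divergence_hutchinson(x=x, f_x=f_x).shape)  # torch.Size([8])
```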