Semi-Discrete Normalizing Flows through Differentiable Tessellation
Authors: Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show improvements over existing methods across a range of structured data modalities. |
| Researcher Affiliation | Industry | Meta AI |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Open source code is available at https://github.com/facebookresearch/semi-discrete-flow. |
| Open Datasets | Yes | We use unprocessed data sets from the UCI database [9]. |
| Dataset Splits | Yes | We then take 80% as train, 10% as validation, and 10% as the test set. (See the split sketch below the table.) |
| Hardware Specification | No | The paper states "All models were trained... on a single GPU," but does not specify the model or any other hardware details. |
| Software Dependencies | No | The paper mentions the Adam optimizer [22] and cites torchdiffeq [5] for the FFJORD baseline, but does not list version numbers for key software components such as Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | All models were trained using the Adam optimizer [22] with a learning rate of 1e-3, cosine annealing scheduler, and a batch size of 256 for 500 epochs on a single GPU. (A minimal training-loop sketch follows the table.) |
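
For reproducers, the 80/10/10 split quoted in the Dataset Splits row maps onto a few lines of PyTorch. This is a minimal sketch, not the authors' preprocessing code; the `dataset` object and the fixed seed are assumptions, since the paper does not state how the split was seeded.

```python
import torch
from torch.utils.data import random_split

def split_dataset(dataset, seed=0):
    """Split a dataset 80/10/10 into train/val/test, as described in the paper.

    The fixed seed is an assumption added for reproducibility; the paper
    does not specify how the split was randomized.
    """
    n = len(dataset)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    n_test = n - n_train - n_val  # remainder goes to test to cover rounding
    generator = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, n_val, n_test], generator=generator)
```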
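Similarly, the reported experiment setup (Adam, learning rate 1e-3, cosine annealing scheduler, batch size 256, 500 epochs, single GPU) corresponds to standard PyTorch components. The sketch below is an assumed reconstruction, not the authors' released training script: `model`, `train_set`, and `loss_fn` are placeholders, and the negative log-likelihood objective is an assumption based on the paper training normalizing flows.

```python
import torch
from torch.utils.data import DataLoader

# Hyperparameters quoted from the Experiment Setup row.
EPOCHS = 500
BATCH_SIZE = 256
LR = 1e-3

def train(model, train_set, loss_fn, device="cuda"):
    """Training loop matching the reported setup; model/loss_fn are placeholders."""
    model.to(device)
    loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=LR)
    # Cosine annealing over the full run, as stated in the paper.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)
    for epoch in range(EPOCHS):
        for batch in loader:
            batch = batch.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model, batch)  # e.g. negative log-likelihood (assumption)
            loss.backward()
            optimizer.step()
        scheduler.step()  # anneal the learning rate once per epoch
    return model
```

For the authors' actual training code, see the released repository linked in the Open Source Code row.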