Learning to Dequantise with Truncated Flows
Authors: Shawn Tan, Chin-Wei Huang, Alessandro Sordoni, Aaron Courville
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, we benchmark TRUFL on constrained generation tasks, and find that it outperforms prior approaches. |
| Researcher Affiliation | Collaboration | Shawn Tan and Chin-Wei Huang, Mila, University of Montreal ({jing.shan.shawn.tan,chin-wei.huang}@umontreal.ca); Alessandro Sordoni, Microsoft Research Montreal (alsordon@microsoft.com); Aaron Courville, Mila, University of Montreal, Canada CIFAR AI Chair (courvila@iro.umontreal.ca) |
| Pseudocode | Yes | Algorithm 1 Truncated Categorical Encoding for a timestep t |
| Open Source Code | No | The paper mentions incorporating existing codebases from other sources, but it does not state that its own code for the described methodology is publicly available. |
| Open Datasets | Yes | Table 2: Performance on molecule generation trained on Zinc250k (Irwin et al., 2012)... |
| Dataset Splits | Yes | For the character level experiments on the Penn Treebank (PTB; Marcus et al. 1993), we use the setup in Cat NF, replacing the categorical encoding with TRUFL. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper mentions building on "the Cat NF codebase" and "the Argmax flow codebase", but it does not specify software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | Table 6: New arguments for the LARGE graph colouring task we use in our experiments. Param./Value: batch size, 128; encoding dim, 2; coupling num flows, 10; optimizer, 4; learning rate, 3e-4. |
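For context on the method being assessed: TRUFL replaces the standard dequantisation step used when fitting normalising flows to discrete data. The baseline it improves upon, uniform dequantisation, simply adds U[0, 1) noise to each discrete symbol so that a continuous density can be fitted; flooring recovers the original data exactly. A minimal generic sketch of that baseline (an illustration of the standard technique, not the paper's own code or its truncated-flow dequantiser):

```python
import random

def uniform_dequantise(xs, rng=random):
    """Map discrete symbols to continuous values by adding U[0, 1) noise.

    A flow can then model a density over the dequantised values; taking
    the floor recovers the original discrete symbols exactly.
    """
    return [x + rng.random() for x in xs]

xs = [0, 3, 7]          # e.g. token ids or pixel intensities
zs = uniform_dequantise(xs)
# Flooring inverts the dequantisation: each z lies in [x, x + 1).
assert [int(z) for z in zs] == xs
```

TRUFL's contribution is to learn this noise distribution with a truncated flow rather than fixing it to be uniform, which tightens the variational bound on the discrete likelihood.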