Action Matching: Learning Stochastic Dynamics from Samples
Authors: Kirill Neklyudov, Rob Brekelmans, Daniel Severo, Alireza Makhzani
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we showcase applications of Action Matching by achieving competitive performance in a diverse set of experiments from biology, physics, and generative modeling. Sections like '4. Applications of Action Matching' detail empirical studies on 'Synthetic Data', 'Embryoid sc RNA-Seq Data', 'Quantum System Simulation', and 'Generative Modeling' with performance metrics in tables (e.g., Table 1, 2, 3). |
| Researcher Affiliation | Collaboration | Vector Institute; University of Toronto. |
| Pseudocode | Yes | Algorithm 1 Action Matching; Algorithm 2 Generative Modeling using Action Matching (In Practice); Algorithm 3 Annealed Langevin Dynamics for the Schrödinger Equation; Algorithm 4 Annealed Langevin Dynamics for the Image Generation. |
| Open Source Code | Yes | Notebooks with pedagogical examples of AM are given at github.com/necludov/jam#tutorials. The code is available at github.com/necludov/jam |
| Open Datasets | Yes | For evaluation, we choose the CIFAR-10 dataset of natural images. For a real data example, we consider an embryoid body single-cell RNA sequencing dataset from Moon et al. (2019). |
| Dataset Splits | No | The paper uses standard datasets like CIFAR-10 and embryoid body sc RNA-seq data but does not explicitly state the train/validation/test splits, percentages, or methodology for these datasets. It refers to 'test data marginals' in Table 1 but provides no details on how the dataset was split into train, validation, and test sets. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments (e.g., GPU models, CPU models, or cloud computing specifications). It only states 'we use known deep learning architectures' and discusses software-level aspects without hardware specifics. |
| Software Dependencies | No | The paper mentions software components such as 'Python', 'PyTorch', 'JAX' (via a toolbox by Cuturi et al., 2022), and a 'U-net architecture', but provides no version numbers for any of them, which are necessary for full reproducibility of the software environment. |
| Experiment Setup | Yes | We train all models using the same architecture for 500k iterations and evaluate the negative log-likelihood, FID (Heusel et al., 2017) and IS (Salimans et al., 2016). For propagating samples in time we select the time step dt = 10^-2 and perform 10 sampling steps for every q_t. We additionally run 100 sampling steps for the final distribution. In total we run 1000 steps to generate images. |
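
As a pedagogical aside on the Pseudocode row, the core of Algorithm 1 (Action Matching) is the paper's variational objective, which trains a scalar "action" network s(x, t) by minimizing E_{q_0}[s_0] − E_{q_1}[s_1] + E_{t, q_t}[½‖∇_x s_t‖² + ∂_t s_t]. A minimal PyTorch sketch of one loss evaluation is below; the function name `action_matching_loss` and the way samples x0, x1, xt, t are supplied are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def action_matching_loss(s, x0, x1, xt, t):
    """One-batch estimate of the Action Matching objective:
        E_{q_0}[s_0] - E_{q_1}[s_1] + E_{t, q_t}[ 0.5 * ||grad_x s_t||^2 + d s_t / dt ].
    s(x, t) -> (batch,) is a scalar-valued network; x0 ~ q_0, x1 ~ q_1,
    xt ~ q_t for t ~ U[0, 1] with t shaped (batch, 1)."""
    # Fresh leaves so autograd can differentiate s w.r.t. its inputs.
    xt = xt.detach().requires_grad_(True)
    t = t.detach().requires_grad_(True)
    st = s(xt, t)
    # Spatial gradient and time derivative of the action in one call;
    # create_graph=True keeps the graph so the loss trains the network.
    grad_x, grad_t = torch.autograd.grad(st.sum(), (xt, t), create_graph=True)
    boundary = s(x0, torch.zeros_like(t)).mean() - s(x1, torch.ones_like(t)).mean()
    interior = (0.5 * grad_x.pow(2).sum(dim=-1) + grad_t.squeeze(-1)).mean()
    return boundary + interior
```

Minimizing this loss over s recovers (up to a constant) the action whose spatial gradient is the velocity field transporting q_0 to q_1, which is what the paper's generative sampling then integrates.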
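
The sampling schedule quoted in the Experiment Setup row (10 steps per intermediate q_t with dt = 10^-2, plus 100 steps at the final distribution, 1000 updates in total) can be sketched as an annealed Langevin loop. This is an assumption-laden illustration: `annealed_langevin`, its signature, and the plain Langevin update rule are stand-ins, not the paper's exact Algorithm 4, and `score(x, t)` is a placeholder for the learned drift.

```python
import numpy as np

def annealed_langevin(score, x, ts, steps_per_t=10, final_steps=100,
                      dt=1e-2, rng=None):
    """Run steps_per_t Langevin updates at each intermediate time in `ts`,
    then final_steps extra updates at the last time. With len(ts) == 90,
    steps_per_t == 10, final_steps == 100 this matches the quoted total
    of 1000 updates."""
    if rng is None:
        rng = np.random.default_rng(0)

    def step(x, t):
        # Euler-Maruyama Langevin update: drift toward high density
        # plus Gaussian noise scaled for the chosen time step.
        noise = rng.standard_normal(x.shape)
        return x + dt * score(x, t) + np.sqrt(2.0 * dt) * noise

    for t in ts:                       # anneal through the intermediate q_t's
        for _ in range(steps_per_t):
            x = step(x, t)
    for _ in range(final_steps):       # refine at the final distribution
        x = step(x, ts[-1])
    return x
```

For a stationary score such as `score = lambda x, t: -x` (a standard Gaussian target), 1000 steps at dt = 10^-2 cover 10 time units, comfortably enough for the chain to mix.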