Representing Closed Transformation Paths in Encoded Network Latent Space
Authors: Marissa Connor, Christopher Rozell (pp. 3666-3675)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments on data with natural closed transformation paths, we show that this model introduces the ability to learn the latent dynamics of complex systems, generate transformation paths, and classify samples that belong on the same transformation path. |
| Researcher Affiliation | Academia | Marissa C. Connor, Christopher J. Rozell Department of Electrical and Computer Engineering Georgia Institute of Technology Atlanta, GA 30332 (marissa.connor, crozell)@gatech.edu |
| Pseudocode | Yes | Algorithm 1 shows the pseudo-code for the training procedure during the fine-tuning phase. (Page 5) |
| Open Source Code | No | The paper notes that code from other authors was used for baseline comparisons (e.g., 'Our hyperspherical VAE implementation came from Nicola De Cao's GitHub page.'), but it does not provide access to the authors' own source code. |
| Open Datasets | Yes | We train our autoencoder and transport operators using a subset of 50,000 images from the MNIST training set (LeCun et al. 1998). ... The data used in this project was obtained from mocap.cs.cmu.edu. The database was created with funding from NSF EIA-0196217. |
| Dataset Splits | Yes | For training on MNIST digits we select 50,000 training digits from the traditional MNIST training set and save the additional 10,000 images for validation. We use the traditional MNIST test dataset for testing. ... We use sequences 1-16 for training, sequences 28 and 29 for validation, and sequences 30-34 for testing. |
| Hardware Specification | No | The paper states 'All experiments were run in pytorch.' but does not provide specific details on the hardware used, such as GPU/CPU models or other system specifications. |
| Software Dependencies | No | The paper mentions 'All experiments were run in pytorch.' but does not specify version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The training parameters are shown in Table 1. ... The training parameters are given in Table 3. ... The training parameters are given in Table 5. (These tables list specific batch sizes, learning rates, epochs/steps, and other hyper-parameters for each experimental phase and dataset.) |
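The MNIST split reported above (50,000 training images, the remaining 10,000 for validation, and the standard test set for testing) can be sketched as follows. This is a minimal illustration, not the authors' code; the paper does not state whether the split is random or sequential, so the seeded random permutation here is an assumption.

```python
import numpy as np

def split_mnist_indices(n_train_total=60000, n_train=50000, seed=0):
    """Return (train_idx, val_idx) index arrays over the MNIST training set.

    The paper uses 50,000 images for training and the remaining 10,000 for
    validation; a seeded random permutation is assumed here for illustration.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_train_total)
    return perm[:n_train], perm[n_train:]

train_idx, val_idx = split_mnist_indices()
# The two index sets are disjoint and together cover all 60,000 training images;
# the official 10,000-image MNIST test set is kept separate for testing.
assert len(train_idx) == 50000 and len(val_idx) == 10000
assert len(np.intersect1d(train_idx, val_idx)) == 0
```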