Auto-Encoding Transformations in Reparameterized Lie Groups for Unsupervised Learning
Authors: Feng Lin, Haohang Xu, Houqiang Li, Hongkai Xiong, Guo-Jun Qi (pp. 8610-8617)
AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that the proposed approach to Auto-Encoding Transformations exhibits superior performance on a variety of recognition problems. In this section, we present our experimental results by comparing AETv2 with AETv1 as well as the other unsupervised models. |
| Researcher Affiliation | Collaboration | Feng Lin (1), Haohang Xu (2), Houqiang Li (1,4), Hongkai Xiong (2), Guo-Jun Qi (3,*); (1) CAS Key Laboratory of GIPAS, EEIS Department, University of Science and Technology of China; (2) Department of Electronic Engineering, Shanghai Jiao Tong University; (3) Laboratory for MAPLE, Futurewei Technologies; (4) Institute of Artificial Intelligence, Hefei Comprehensive National Science Center |
| Pseudocode | No | The paper does not include a structured pseudocode or algorithm block. The methodology is described in text and through a pipeline diagram. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Following the standard evaluation protocol in literature (Zhang et al. 2019; Qi et al. 2019; Oyallon and Mallat 2015; Dosovitskiy et al. 2014; Radford, Metz, and Chintala 2015; Oyallon, Belilovsky, and Zagoruyko 2017; Gidaris, Singh, and Komodakis 2018), we will adopt downstream classification tasks to evaluate the learned representations on CIFAR10, ImageNet, and Places datasets. |
| Dataset Splits | Yes | Following the standard evaluation protocol in literature (Zhang et al. 2019; Qi et al. 2019; Oyallon and Mallat 2015; Dosovitskiy et al. 2014; Radford, Metz, and Chintala 2015; Oyallon, Belilovsky, and Zagoruyko 2017; Gidaris, Singh, and Komodakis 2018), we will adopt downstream classification tasks to evaluate the learned representations on CIFAR10, ImageNet, and Places datasets. A classifier is then built on top of the second convolutional block to evaluate the quality of the learned representation following the standard protocol in literature (Zhang et al. 2019; Qi et al. 2019; Oyallon and Mallat 2015; Dosovitskiy et al. 2014; Radford, Metz, and Chintala 2015; Oyallon, Belilovsky, and Zagoruyko 2017; Gidaris, Singh, and Komodakis 2018). A hedged sketch of this linear-probe evaluation appears below the table. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or cloud computing instances used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like the 'Adam solver' but does not provide specific version numbers for any software dependencies required to replicate the experiment. |
| Experiment Setup | Yes | The model is trained by the Adam solver with a learning rate of 10^-5, values of 0.9 and 0.999 for β1 and β2, and a weight decay rate of 5 × 10^-4. ...train the network with a batch size of 768 original and transformed images. These settings are sketched in code below the table. |
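
For concreteness, below is a minimal sketch of the linear-probe evaluation quoted under Dataset Splits, assuming a generic PyTorch encoder whose `block1` and `block2` attributes stand in for the paper's first two convolutional blocks; the attribute names, feature dimension, and pooling choice are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of the evaluation protocol: a classifier trained on frozen
# features taken after the encoder's second convolutional block.
# `block1`/`block2`, `feat_dim`, and the pooling choice are assumptions.
import torch
import torch.nn as nn


class LinearProbe(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int = 10):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # the pretrained representation stays frozen
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        with torch.no_grad():
            h = self.encoder.block2(self.encoder.block1(x))  # features after block 2
        h = torch.flatten(self.pool(h), start_dim=1)
        return self.classifier(h)
```

Only `self.classifier` is updated during this evaluation stage, which matches the standard protocol of probing fixed unsupervised features with a downstream classifier.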
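Likewise, the reported optimization settings translate directly into a PyTorch Adam configuration. The sketch below uses a placeholder model and dataset (hypothetical names), since the paper does not release code; only the hyperparameter values are taken from the quoted setup.

```python
# Hedged sketch of the reported training configuration; `toy_model` and
# `toy_data` are placeholders standing in for the AETv2 network and the
# paired original/transformed images, which the paper does not release.
import torch
from torch.utils.data import DataLoader, TensorDataset

toy_model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3), torch.nn.Flatten())  # placeholder network
toy_data = TensorDataset(torch.randn(1024, 3, 32, 32))                          # placeholder images

optimizer = torch.optim.Adam(
    toy_model.parameters(),
    lr=1e-5,              # learning rate 10^-5
    betas=(0.9, 0.999),   # β1 = 0.9, β2 = 0.999
    weight_decay=5e-4,    # weight decay 5 × 10^-4
)
loader = DataLoader(toy_data, batch_size=768, shuffle=True)  # 768 original + transformed images per batch
```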