Affine Equivariant Autoencoder

Authors: Xifeng Guo, En Zhu, Xinwang Liu, Jianping Yin

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments are conducted to validate the equivariance and discriminative ability of the features learned by our affine equivariant autoencoder."
Researcher Affiliation | Academia | Xifeng Guo¹, En Zhu¹, Xinwang Liu¹, and Jianping Yin² — ¹College of Computer, National University of Defense Technology, Changsha 410073, China; ²Dongguan University of Technology, Dongguan, China. Emails: guoxifeng1990@163.com, {enzhu, xinwangliu}@nudt.edu.cn, jpyin@dgut.edu.cn
Pseudocode | No | The paper describes the model architecture and training process in text and diagrams (e.g., Figure 1, Figure 2), but it does not include any explicit pseudocode blocks or algorithm listings.
Open Source Code | Yes | The source code is publicly available at https://github.com/XifengGuo/AEAE.
Open Datasets | Yes | Four image datasets are used in the experiments: MNIST-full, a popular handwritten digit dataset with 70,000 examples [LeCun et al., 1998]; MNIST-test, which contains only the test set of MNIST-full, with 10,000 examples; USPS, a dataset of 9,298 gray digit images (http://www.cad.zju.edu.cn/home/dengcai/Data/MLData.html); and Fashion, a dataset of Zalando's article images [Xiao et al., 2017].
Dataset Splits | No | The paper mentions dividing datasets into training and testing sets (e.g., "Each dataset is divided into training and testing set by the ratio of 6 : 1") and using cross-validation for parameter selection ("We select the best penalty parameter C of the error term from [2^0, 2^1, . . . , 2^9] by cross validation"). However, it does not define a separate validation set with specific split percentages or sample counts for model tuning or early stopping.
Hardware Specification | No | The paper describes the model architecture, optimization settings, and dataset details, but it does not specify the hardware used to run the experiments (e.g., CPU or GPU models, memory, or cloud instances).
Software Dependencies | No | The paper mentions using the "Adam [Kingma and Ba, 2014] optimizer", activation functions such as "ReLU [Glorot et al., 2011]", and initialization following "[He et al., 2015]", but it does not provide version numbers for any software libraries, frameworks, or programming languages.
Experiment Setup | Yes | The paper gives the detailed structure of the affine equivariant autoencoder (AEAE): the model consists of eight fully connected layers, with the number of neurons in each layer listed in the architecture diagram, where c, w, h are the number of channels, the width, and the height of the input image. The embedding layer (15 neurons) and the output layer use linear and sigmoid activations, respectively; all internal layers use ReLU [Glorot et al., 2011]. The AEAE is trained end-to-end with the Adam [Kingma and Ba, 2014] optimizer, with initial learning rate 0.001, β1 = 0.9, and β2 = 0.999. The maximum number of epochs is 100 for large datasets (n > 10000) and 500 for small ones (n <= 10000). The mini-batch size is fixed to 256.
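The reported setup can be sketched as follows. This is a minimal PyTorch reconstruction, not the authors' code: the paper does not name a framework, and the hidden-layer widths (500, 500, 2000) are illustrative assumptions; only the eight fully connected layers, the 15-neuron linear embedding, the sigmoid output, the ReLU internals, and the Adam settings come from the quoted text.

```python
import torch
import torch.nn as nn


class AEAE(nn.Module):
    """Sketch of the eight-fully-connected-layer autoencoder described
    in the paper. Hidden widths are assumptions; the paper fixes only
    the 15-neuron embedding and the activation choices."""

    def __init__(self, input_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, 2000), nn.ReLU(),
            nn.Linear(2000, 15),  # embedding layer: linear activation
        )
        self.decoder = nn.Sequential(
            nn.Linear(15, 2000), nn.ReLU(),
            nn.Linear(2000, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, input_dim), nn.Sigmoid(),  # output in [0, 1]
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)          # 15-dim embedding
        return self.decoder(z), z


# c * w * h for a 1-channel 28x28 image (MNIST/Fashion)
model = AEAE(input_dim=1 * 28 * 28)
optimizer = torch.optim.Adam(
    model.parameters(), lr=0.001, betas=(0.9, 0.999)
)
```

A training loop would then feed mini-batches of 256 flattened images and run for 100 epochs (n > 10000) or 500 epochs (n <= 10000), per the paper's settings.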