Geometric Enclosing Networks

Authors: Trung Le, Hung Vu, Tu Dinh Nguyen, Dinh Phung

IJCAI 2018

Reproducibility assessment. Each variable below is listed with its result, followed by the supporting LLM response.
Research Type: Experimental. LLM response: "We conducted extensive experiments on synthetic and real-world datasets to illustrate the behaviors, strengths and weaknesses of our proposed GEN, in particular its ability to handle multi-modal data and the quality of generated data."
Researcher Affiliation: Academia. LLM response: "Trung Le¹, Hung Vu², Tu Dinh Nguyen¹ and Dinh Phung¹. ¹Faculty of Information Technology, Monash University; ²Center for Pattern Recognition and Data Analytics, Deakin University, Australia."
Pseudocode: Yes. LLM response: "Algorithm 1: Algorithm for GEN."
Open Source Code: No. LLM response: The paper does not provide any statement or link indicating that the source code for the methodology is publicly available.
Open Datasets: Yes. LLM response: "The popular MNIST dataset [LeCun et al., 1998] contains 60,000 images of digits from 0 to 9. Our experiments were further extended to generating color images of real-life objects (CIFAR-10 [Krizhevsky, 2009]) and human faces (CelebA [Liu et al., 2015]). The Frey Face dataset [Roweis and Saul, 2000] contains approximately 2,000 images of Brendan's face, taken from sequential frames of a small video."
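For reference, the three larger public datasets are available through torchvision; the following is a minimal loading sketch assuming a PyTorch-based reproduction (the paper itself names no framework, and the Frey Face data must be fetched separately, e.g. as the frey_rawface.mat file):

```python
# Hypothetical loading sketch; torchvision is an assumption here, since
# the paper does not name a framework. Frey Face is not in torchvision.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

mnist = datasets.MNIST(root="data", train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
celeba = datasets.CelebA(root="data", split="train", download=True, transform=to_tensor)

print(len(mnist))  # 60,000 training digits, as stated in the paper
```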
Dataset Splits: No. LLM response: The paper mentions training on subsets of MNIST (1,000, 5,000, and 60,000 images) and on the other datasets, but does not explicitly provide specific train/validation/test splits (e.g., percentages, sample counts, or citations to predefined splits) to reproduce the partitioning.
Hardware Specification: No. LLM response: The paper does not provide any specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies: No. LLM response: The paper mentions the "ADAM optimizer" and a "neural network" but does not provide specific version numbers for any programming languages, libraries, or solvers used in the implementation.
Experiment Setup: Yes. LLM response: "Unless otherwise specified, in all our experiments, when stochastic gradient descent was used, the ADAM optimizer [Kingma and Ba, 2014] with learning rate empirically tuned to around 1e-3 and 1e-4 will be employed." "The neural network specification for our generator G(z) includes 2 hidden layers, each with 30 softplus units (and D = 100 for the number of random features, cf. Eq. (7)), and z ∼ Uni(−1, 1)." "Our generator G(z) has the architecture 1,000 → 1,000 → 1,000 → 1,000 (softplus units) and 784 sigmoid output units; and D = 5,000 random features were used to construct Φ." "We used a convolutional generator network with 512 → 256 → 128 → 1,020 (rectified linear units) and sigmoid output units and trained a leaky rectified linear discriminator network with 3 layers (32 → 64 → 128)."
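To make the reported MNIST configuration concrete, here is a minimal PyTorch sketch (the framework is an assumption; the paper names none). Layer sizes, unit types, optimizer, and learning rate follow the excerpt above; LATENT_DIM, the batch size, and the form of the random feature map Φ are hypothetical, since Eq. (7) is not reproduced in this report:

```python
# Minimal sketch of the MNIST generator described above, assuming PyTorch.
import torch
import torch.nn as nn

LATENT_DIM = 100  # assumption: the excerpt does not state dim(z) for MNIST

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 1000), nn.Softplus(),
    nn.Linear(1000, 1000), nn.Softplus(),
    nn.Linear(1000, 1000), nn.Softplus(),
    nn.Linear(1000, 1000), nn.Softplus(),
    nn.Linear(1000, 784), nn.Sigmoid(),  # 784 = 28x28 MNIST pixels
)

# ADAM optimizer with a learning rate in the range reported above.
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)

# z ~ Uni(-1, 1), as specified in the paper.
z = torch.rand(64, LATENT_DIM) * 2.0 - 1.0
fake_images = generator(z)  # shape: (64, 784)

# Random feature map Phi with D = 5,000 features, assuming the standard
# Gaussian-kernel random Fourier construction (a guess at Eq. (7)'s form).
D = 5000
W = torch.randn(784, D)             # random frequencies (bandwidth omitted)
b = 2.0 * torch.pi * torch.rand(D)  # random phases
phi = lambda x: (2.0 / D) ** 0.5 * torch.cos(x @ W + b)
features = phi(fake_images)         # shape: (64, 5000)
```

In GEN, Φ maps generated samples into the random-feature space in which the enclosing ball is defined; the kernel bandwidth and any scaling constants in Eq. (7) would need to be taken from the paper itself.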