Joint Generative Moment-Matching Network for Learning Structural Latent Code

Authors: Hongchang Gao, Heng Huang

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | At last, extensive experiments on both synthetic and real-world datasets have verified the effectiveness and correctness of our proposed JGMMN.
Researcher Affiliation | Academia | Hongchang Gao, Heng Huang; Department of Electrical and Computer Engineering, University of Pittsburgh, USA; hongchanggao@gmail.com, heng.huang@pitt.edu
Pseudocode | No | The paper includes mathematical formulations and descriptions of processes but does not provide a clearly labeled pseudocode or algorithm block.
Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | Yes | We conduct experiments on five real-world datasets, which include 3 image datasets: MNIST [LeCun et al., 1998], USPS [Cai et al., 2011], Extended Yale B (EYB), and 2 text datasets: Reuters-10K [Xie et al., 2016], 20News.
Dataset Splits | No | The paper mentions synthetic and real-world datasets and some sample counts, but it does not provide explicit training, validation, or test dataset splits (e.g., percentages or exact counts for each split).
Hardware Specification | No | The paper does not provide any specific details regarding the hardware used to conduct the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions activation functions and network types but does not list specific software dependencies, such as programming languages, libraries, or frameworks with their version numbers.
Experiment Setup | Yes | Both the generation and inference networks of the first synthetic dataset are 3-layer MLPs: [100, 100, 2]. Similarly, those of the second synthetic dataset are [100, 300, 2]. The activation function employed is ReLU [Nair and Hinton, 2010]; note that the last layer employs a linear activation. The mini-batch size is set to 500. The kernel employed in JMMD is the Gaussian kernel. Here, we use a mixture of several Gaussian kernels, that is, $k(x_i, x_j) = \sum_{m=1}^{M} k_m(x_i, x_j)$, where different kernels have different bandwidth parameters. In this paper, the bandwidths employed are {2.0, 5.0, 10.0, 20.0, 40.0, 80.0}.
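
For concreteness, below is a minimal sketch, not the authors' code, of the setup quoted above: a 3-layer MLP with ReLU hidden activations and a linear last layer, plus an MMD term built from a mixture of Gaussian kernels over the listed bandwidths. PyTorch is assumed; the names MLP, mix_gaussian_kernel, and mmd2 are hypothetical; reading "[100, 100, 2]" as two hidden layers of width 100 with a 2-dimensional linear output is an interpretation, and whether each bandwidth enters the kernel as sigma or sigma squared is a guess, since the quote does not pin down the parameterization.

# A minimal sketch under the assumptions stated above; not the authors' implementation.
import torch
import torch.nn as nn

BANDWIDTHS = (2.0, 5.0, 10.0, 20.0, 40.0, 80.0)  # bandwidths quoted in the paper

class MLP(nn.Module):
    # 3-layer MLP: ReLU on hidden layers, linear activation on the last layer.
    def __init__(self, in_dim, hidden=(100, 100), out_dim=2):
        super().__init__()
        layers, prev = [], in_dim
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ReLU()]
            prev = h
        layers.append(nn.Linear(prev, out_dim))  # linear output, per the quoted setup
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def mix_gaussian_kernel(a, b, bandwidths=BANDWIDTHS):
    # Mixture kernel k(x_i, x_j) = sum_m exp(-||x_i - x_j||^2 / (2 * sigma_m^2)).
    # Treating each listed bandwidth as sigma_m is an assumption.
    d2 = torch.cdist(a, b).pow(2)  # pairwise squared Euclidean distances
    return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in bandwidths)

def mmd2(x, y):
    # Biased estimate of the squared MMD between two sample batches.
    return (mix_gaussian_kernel(x, x).mean()
            + mix_gaussian_kernel(y, y).mean()
            - 2.0 * mix_gaussian_kernel(x, y).mean())

With mini-batches of 500 samples, as in the quoted setup, something like mmd2(data_batch, MLP(noise_dim)(noise_batch)) would serve as the moment-matching statistic that a GMMN-style objective minimizes.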