Variational Graph Embedding and Clustering with Laplacian Eigenmaps

Authors: Zitai Chen, Chuan Chen, Zong Zhang, Zibin Zheng, Qingsong Zou

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform extensive experiments on both synthetic and real-world networks to corroborate the effectiveness and efficiency of the proposed framework. In this section, we use four benchmark real-world datasets to demonstrate the effectiveness of VGECLE. We provide quantitative comparisons of VGECLE with other state-of-the-art clustering methods in two categories: shallow models and deep learning models. The experimental results show significant improvements with respect to the baselines.
Researcher Affiliation | Academia | Zitai Chen (1,2), Chuan Chen (1,2), Zong Zhang (1), Zibin Zheng (1,2), and Qingsong Zou (1,3). Affiliations: (1) School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China; (2) Guangdong Key Laboratory for Big Data Analysis and Simulation of Public Opinion, School of Communication and Design, Sun Yat-sen University, Guangzhou, China; (3) Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou, China.
Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any specific links or explicit statements about the availability of its source code.
Open Datasets | Yes | To evaluate the effectiveness and efficiency of the proposed framework, we employ three networked datasets: Cora, BlogCatalog, and Flickr. All the networks are publicly available and undirected. The statistics of the datasets are summarized in Table 1.
Dataset Splits | No | In the unsupervised clustering scenario, we are not capable of determining network structure by cross-validation on a validation set.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions using an autoencoder for pretraining but does not specify any software names with version numbers, such as programming languages, libraries, or frameworks.
Experiment Setup | Yes | We set the network dimensions to input-500-100-D, where "input" is the dimension of the adjacency vector, i.e., n. The learning rate for Cora, BlogCatalog, and Flickr2 is 0.01 and decreases every 100 epochs with a decay rate of 0.9; the learning rate for Flickr1 is 0.001 with the same decay schedule. We set the dimension of the embeddings to 10 in all experiments. β balances the trade-off between the reconstruction loss and the pairwise relationship; the performance with β = 0.0001 is better than the others.
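
To make the reported setup concrete, below is a minimal sketch assuming PyTorch (the paper does not name its framework or release code). The Encoder class, the Gaussian mean/variance heads, the choice of Adam, and the simplified loss wiring in the comments are illustrative assumptions, not the authors' implementation; only the layer sizes, learning rates, decay schedule, embedding dimension, and β come from the quoted text.

```python
# Minimal sketch (not the authors' code) of the reported experiment setup, assuming PyTorch.
import torch
import torch.nn as nn

N_INPUT = 2708        # "input" = dimension of the adjacency vector, i.e. n (e.g. 2708 nodes for Cora)
EMBED_DIM = 10        # embedding dimension D, set to 10 in all experiments
BETA = 1e-4           # trade-off between reconstruction loss and the pairwise relationship

class Encoder(nn.Module):
    """Network dimensions input -> 500 -> 100 -> D, as reported in the paper."""
    def __init__(self, n_input, n_embed):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Linear(n_input, 500), nn.ReLU(),
            nn.Linear(500, 100), nn.ReLU(),
        )
        # Gaussian mean / log-variance heads (assumed for the variational embedding).
        self.mu = nn.Linear(100, n_embed)
        self.log_var = nn.Linear(100, n_embed)

    def forward(self, adj_row):
        h = self.hidden(adj_row)
        return self.mu(h), self.log_var(h)

model = Encoder(N_INPUT, EMBED_DIM)
# Learning rate 0.01 for Cora / BlogCatalog / Flickr2 (0.001 for Flickr1),
# decreased every 100 epochs with a decay rate of 0.9. Adam is an assumption.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.9)

# Per training epoch (the loss terms themselves are omitted in this sketch):
#   loss = reconstruction_loss + BETA * pairwise_loss
#   loss.backward(); optimizer.step(); scheduler.step()
```

Here StepLR with step_size=100 and gamma=0.9 mirrors the reported schedule of decaying the learning rate every 100 epochs by a factor of 0.9; everything not stated in the quoted setup should be treated as a placeholder.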