Tri-Party Deep Network Representation

Authors: Shirui Pan, Jia Wu, Xingquan Zhu, Chengqi Zhang, Yang Wang

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5 Experimental Results, 5.1 Experimental Setup: We report our experimental results on two networks. Both of them are citation networks, and we use the paper title as node content for each node in the networks. The DBLP dataset consists of bibliography data in computer science [Tang et al., 2008]. ... CiteSeer-M10 is a subset [Lim and Buntine, 2014] of CiteSeerX data, which consists of scientific publications from 10 distinct research areas: ... Table 1: Average Macro-F1 Score and Standard Deviation on the CiteSeer-M10 Network; Table 2: Average Macro-F1 Score and Standard Deviation on the DBLP Network
Researcher Affiliation | Academia | Centre for Quantum Computation & Intelligent Systems, FEIT, University of Technology Sydney; Dept. of Computer & Electrical Engineering and Computer Science, Florida Atlantic University, USA; The University of New South Wales, Australia
Pseudocode | No | The paper describes the model architecture and steps in prose and diagrams (Figure 2), but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology.
Open Datasets | Yes | The DBLP dataset consists of bibliography data in computer science [Tang et al., 2008]. ... CiteSeer-M10 is a subset [Lim and Buntine, 2014] of CiteSeerX data, which consists of scientific publications from 10 distinct research areas: agriculture, archaeology, biology, computer science, financial economics, industrial engineering, material science, petroleum chemistry, physics, and social science.
Dataset Splits | No | The paper states, 'In each network, p% nodes are randomly labeled, the rest are unlabeled.' and 'We vary the percentages of training samples p% from 10% to 70%.' It then says, 'we train a linear SVM from the training data (nodes) to predict unlabeled nodes.' There is no explicit mention of a separate validation split. (A minimal sketch of this split-and-classify protocol appears after the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions various algorithms and models (e.g., DeepWalk, Doc2Vec, LDA, linear SVM) but does not provide version numbers for any software components or libraries used in the implementation.
Experiment Setup | Yes | The default parameters for TriDNR are set as follows: window size b = 8, dimensions k = 300, training size p = 30%, and one further parameter set to 0.8. For fairness of comparison, all competing algorithms use the same number of features k, and their parameters are kept the same as, or as close as possible to, those of TriDNR. For instance, all neural network models use window size b = 8. (The configuration sketch after the table collects these defaults.)
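
The split-and-classify protocol quoted in the Dataset Splits row can be illustrated with a short sketch. This is a minimal illustration only, not the authors' code: it assumes precomputed node embeddings and labels held as NumPy arrays, uses scikit-learn's LinearSVC with macro-F1, and averages over repeated random splits; the function and variable names are my own.

    import numpy as np
    from sklearn.metrics import f1_score
    from sklearn.svm import LinearSVC

    def evaluate_split(embeddings, labels, p=0.3, repeats=10, seed=0):
        # Randomly designate p% of the nodes as labeled training data, fit a
        # linear SVM on their embeddings, and predict the remaining (unlabeled)
        # nodes. Returns the mean and standard deviation of the macro-F1 score.
        rng = np.random.default_rng(seed)
        n = len(labels)
        scores = []
        for _ in range(repeats):
            order = rng.permutation(n)
            n_train = int(p * n)
            train_idx, test_idx = order[:n_train], order[n_train:]
            clf = LinearSVC().fit(embeddings[train_idx], labels[train_idx])
            pred = clf.predict(embeddings[test_idx])
            scores.append(f1_score(labels[test_idx], pred, average="macro"))
        return float(np.mean(scores)), float(np.std(scores))

Varying p from 0.1 to 0.7 would reproduce the sweep over training-sample percentages described in the quote.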
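
The defaults reported in the Experiment Setup row can likewise be collected in one place. The class and field names below are hypothetical conveniences rather than identifiers from the paper, and the role of the parameter reported only as "0.8" is not recoverable from the extracted text.

    from dataclasses import dataclass

    @dataclass
    class TriDNRDefaults:
        window_size: int = 8       # b = 8, also applied to all neural baselines
        dimensions: int = 300      # k = 300, same feature count for every method
        train_ratio: float = 0.30  # p = 30% of nodes randomly labeled for training
        extra_weight: float = 0.8  # the value quoted only as "0.8" in the setup

For a like-for-like comparison, the same dimensions and window_size values would be passed to DeepWalk, Doc2Vec, and the other baselines, as the quoted setup describes.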