Neighbor Combinatorial Attention for Critical Structure Mining

Authors: Tanli Zuo, Yukun Qiu, Wei-Shi Zheng

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental results show that NCAT achieves state-of-the-art performance on several benchmark graph classification datasets. In addition, we interpret what kind of features our model learned by visualizing the extracted critical structures. We conducted experiments on graph classification and hand-written digit datasets to evaluate our module."
Researcher Affiliation | Academia | School of Data and Computer Science, Sun Yat-sen University, China; Peng Cheng Laboratory, Shenzhen 518005, China; Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China
Pseudocode | No | The paper describes its algorithms in prose but does not include structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that open-source code for the described methodology is available.
Open Datasets | Yes | "To evaluate our module, we chose three datasets that have a relatively large maximum degree and three that have a relatively small maximum degree. All these datasets include node labels or features. Table 1 shows some basic information about those datasets. Detailed information on the datasets can be found in [Yanardag and Vishwanathan, 2015]."
Dataset Splits | Yes | "Following the conventional validation approach, we performed 10-fold cross-validation to evaluate the accuracy of our model. To ensure a fair comparison, we guaranteed that the data in all 10 folds were randomly shuffled and that the class distribution did not vary among folds." (See the stratified cross-validation sketch after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions methods such as Instance Normalization and the Adam optimizer but does not specify any software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | No | The paper mentions using Instance Normalization, the Adam optimizer, and cross-entropy loss, and details the 10-fold cross-validation setup, but it does not provide specific hyperparameter values such as learning rate, batch size, or number of epochs. (See the training-setup sketch after this table.)
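
The splitting protocol quoted in the Dataset Splits row, shuffled folds with the class distribution held constant across them, maps directly onto a stratified k-fold split. Below is a minimal sketch assuming scikit-learn; `graphs`, `labels`, and the fold-accuracy placeholder are illustrative stand-ins, not artifacts from the paper.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

labels = np.array([0, 1] * 50)      # hypothetical binary graph labels
graphs = np.arange(len(labels))     # stand-in indices for the graph objects

# shuffle=True randomizes the order; stratification keeps the class
# distribution (nearly) constant across the 10 folds, matching the quote.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

fold_accuracies = []
for train_idx, test_idx in skf.split(graphs, labels):
    # train on graphs[train_idx], evaluate on graphs[test_idx] ...
    fold_accuracies.append(1.0)     # placeholder for the measured fold accuracy

print(f"mean accuracy over 10 folds: {np.mean(fold_accuracies):.3f}")
```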
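
The Experiment Setup row flags that the paper names its training ingredients (Adam, cross-entropy loss, Instance Normalization) without reporting hyperparameter values. The sketch below shows exactly what a reproduction would have to fill in, assuming PyTorch; the model, learning rate, batch, and epoch count are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

# Stand-in classifier; the actual NCAT architecture is not reproduced here.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

# lr is an assumed placeholder: the paper does not report a learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()   # cross-entropy loss, as stated in the paper

x = torch.randn(8, 64)              # dummy batch of graph-level features
y = torch.randint(0, 2, (8,))       # dummy class labels

for epoch in range(10):             # epoch count is also an assumption
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```

Every value marked as assumed above would need to be chosen or tuned by anyone attempting a reproduction, which is why the row is scored No.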