Low-Bit Quantization for Attributed Network Representation Learning
Authors: Hong Yang, Shirui Pan, Ling Chen, Chuan Zhou, Peng Zhang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on real-world node classification and link prediction tasks validate the promising results of the proposed LQANR model. |
| Researcher Affiliation | Collaboration | Hong Yang (1), Shirui Pan (2), Ling Chen (1), Chuan Zhou (3) and Peng Zhang (4); (1) Centre for Artificial Intelligence, University of Technology Sydney, Australia; (2) Faculty of Information Technology, Monash University, Australia; (3) Institute of Information Engineering, Chinese Academy of Sciences, China; (4) Ant Financial Services Group, Hangzhou, China |
| Pseudocode | Yes | Algorithm 1 The LQANR model |
| Open Source Code | No | The paper does not provide any explicit statement about open-source code availability or a link to a code repository for the described methodology. |
| Open Datasets | Yes | Datasets. Three real-world attributed networks are used as testbed. They are popularly used in many network embedding works such as [Yang et al., 2015; Huang et al., 2017b]. We summarize the statistics of datasets in Table 1. |
| Dataset Splits | Yes | For the node classification experiment, we use SVM [Fan et al., 2008] as the classifier. The training ratios range from 10% to 90% for all datasets, and we use 10-fold cross-validation. Performance is evaluated in terms of Micro-F1 (%) and Macro-F1 (%). |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using SVM and other baselines, but does not specify any software names with version numbers for reproducibility (e.g., Python, PyTorch, specific libraries and their versions). |
| Experiment Setup | Yes | We set the embedding dimension d = 100 for all baselines. All the parameters are set to be the default values. ... the bit-width used for each node's representation is {−1, 0, +1}, and the regularization parameter β is set to 0.001. |
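The experiment-setup row reports that node representations take values in {−1, 0, +1}. LQANR learns these low-bit codes jointly during optimization; as a minimal illustration of the reported value set only, the sketch below projects real-valued embeddings onto {−1, 0, +1} with a sign rule and a hypothetical zeroing threshold `delta` (not a parameter from the paper).

```python
import numpy as np

def ternarize(H, delta=0.5):
    """Project real-valued embeddings onto {-1, 0, +1}.

    delta is a hypothetical threshold: entries with |h| <= delta are
    zeroed, the rest keep their sign. This only illustrates the value
    set reported for LQANR, not the model's actual learning procedure.
    """
    Q = np.sign(H)
    Q[np.abs(H) <= delta] = 0
    return Q.astype(np.int8)

H = np.array([[0.9, -0.2, -1.3],
              [0.1,  0.7, -0.6]])
print(ternarize(H))  # every entry is drawn from {-1, 0, +1}
```

Storing each entry in 2 bits instead of a 32-bit float is what makes such low-bit embeddings attractive for memory-constrained deployment.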
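The dataset-splits row describes the evaluation protocol: an SVM classifier [Fan et al., 2008], training ratios from 10% to 90%, and Micro-/Macro-F1 scoring. A minimal sketch of that protocol, assuming scikit-learn's `LinearSVC` (a liblinear wrapper) and synthetic stand-in data in place of the learned embeddings; the paper additionally uses 10-fold cross-validation, which a single split here only approximates:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Stand-in data: ternary node embeddings (d = 100, as in the setup row)
# and synthetic labels. Real runs would use the learned LQANR codes.
X = rng.choice([-1, 0, 1], size=(300, 100)).astype(float)
y = rng.integers(0, 3, size=300)

for ratio in (0.1, 0.5, 0.9):  # training ratios as in the protocol
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=ratio, random_state=0, stratify=y)
    clf = LinearSVC().fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    micro = f1_score(y_te, pred, average="micro")
    macro = f1_score(y_te, pred, average="macro")
    print(f"train={ratio:.0%}  Micro-F1={micro:.3f}  Macro-F1={macro:.3f}")
```

On these random labels the scores hover near chance; the point is only the split-train-score loop, not the numbers.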