Biased Random Walk based Social Regularization for Word Embeddings

Authors: Ziqian Zeng, Xin Liu, Yangqiu Song

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show that our random walk based social regularizations perform better on sentiment classification."
Researcher Affiliation | Academia | "Department of CSE, The Hong Kong University of Science and Technology; School of Data and Computer Science, Sun Yat-sen University"
Pseudocode | Yes | "Algorithm 1: SWE with regularization using Random Walks. Algorithm 2: Social Regularization"
Open Source Code | Yes | "The code is available at https://github.com/HKUSTKnowComp/SRBRW."
Open Datasets | Yes | "We conducted all experiments on the Yelp Challenge datasets (https://www.yelp.com/dataset_challenge), which provide a large number of review texts along with large social networks."
Dataset Splits | Yes | "We randomly split the data 8:1:1 for training, development, and testing, identically for both training word embeddings and downstream tasks, ensuring that reviews published by the same user are distributed across the training, development, and test sets in the same proportion."
Hardware Specification | Yes | "We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan GPU used for this research."
Software Dependencies | No | The paper mentions software such as word2vec, CBOW, and LibLinear, but does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | "Finally, we set β = 0.5 in Yelp Round 9, β = 1.0 in Yelp Round 10, and l = 60, n = 10, p = 0.5, q = 1, α = 0.12, λ = 8.0, r2 = 0.25 in both datasets. Unless we test the parameter sensitivity of our algorithms, we will fix all the hyper-parameters for the following experiments."