Sparse Word Embeddings Using ℓ1 Regularized Online Learning

Authors: Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, Xueqi Cheng

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed model is evaluated on both expressive power and interpretability. The results show that, compared with the original CBOW model, the proposed model can obtain state-of-the-art results with better interpretability while using less than 10% non-zero elements.
Researcher Affiliation | Academia | CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, China
Pseudocode | Yes | Algorithm 1: RDA algorithm for Sparse CBOW (a hedged sketch of this update appears below the table).
Open Source Code | No | The paper refers to released tools for the baseline models (GloVe, CBOW, SG, SC) but does not provide a link to, or an explicit statement of release for, the source code of its own Sparse CBOW model.
Open Datasets | Yes | We take the widely used Wikipedia April 2010 dump [Shaoul and Westbury, 2010] as the corpus to train all the models (http://www.psych.ualberta.ca/westburylab/downloads/westburylab.wikicorp.download.html).
Dataset Splits | Yes | For the ℓ1 regularization penalty λ, we perform a grid search on it and select the value that maximizes performance on a development test set (a small subset of WordSim-353) while achieving at least 90% sparsity in the word vectors (a sketch of this selection loop appears below the table).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) for the machines used to run its experiments.
Software Dependencies | No | The paper mentions using 'the SPAMS package' for its NNSE implementation but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | We set the context window size to 10 and use 10 negative samples. Like CBOW, we set the initial learning rate of the Sparse CBOW model to 0.05 and decrease it linearly to zero at the end of the last training epoch. For the ℓ1 regularization penalty λ, we perform a grid search on it... (a configuration sketch appears below the table).
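
The Pseudocode row above cites Algorithm 1, an RDA (regularized dual averaging) procedure for Sparse CBOW. The sketch below shows the coordinate-wise ℓ1-RDA truncation step such a procedure relies on, assuming the standard RDA closed form with a running gradient average, an ℓ1 penalty lam, and a step-size constant gamma; the function name, the gamma parameter, and the sqrt(t)/gamma scaling are illustrative assumptions, not the paper's exact Algorithm 1.

```python
import numpy as np

def rda_l1_update(g_bar, t, lam, gamma=1.0):
    """One l1-RDA update for a single word vector: g_bar is the running
    average of that vector's gradients after t updates, lam the l1
    penalty, gamma a step-size constant (illustrative assumption)."""
    w = np.zeros_like(g_bar)
    # Coordinates whose average gradient magnitude exceeds the penalty
    # stay active; all others are truncated to exactly zero, which is
    # where the sparsity in the learned embeddings comes from.
    active = np.abs(g_bar) > lam
    w[active] = -(np.sqrt(t) / gamma) * (
        g_bar[active] - lam * np.sign(g_bar[active])
    )
    return w
```

In an online run the gradient average would be refreshed after every CBOW negative-sampling step, e.g. `g_bar = ((t - 1) * g_bar + g_t) / t`, before calling the update.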
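The Dataset Splits row describes how λ is chosen: grid search, keep only values reaching at least 90% sparsity, and pick the one scoring best on a small WordSim-353 development subset. A sketch of that selection loop, with `train_fn` and `dev_score_fn` as placeholders standing in for the authors' training and evaluation code:

```python
import numpy as np

def select_lambda(lambdas, train_fn, dev_score_fn, min_sparsity=0.90):
    """Pick the l1 penalty that scores best on the development set among
    all candidates whose embeddings are at least 90% zeros."""
    best_lam, best_score = None, -np.inf
    for lam in lambdas:
        emb = train_fn(lam)              # |V| x d embedding matrix
        sparsity = np.mean(emb == 0.0)   # fraction of exactly-zero entries
        if sparsity < min_sparsity:
            continue                     # fails the 90% sparsity constraint
        score = dev_score_fn(emb)        # e.g. Spearman rho on the dev subset
        if score > best_score:
            best_lam, best_score = lam, score
    return best_lam
```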
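The Experiment Setup row lists the reported hyperparameters: a context window of 10, 10 negative samples, and an initial learning rate of 0.05 decayed linearly to zero over training. A small configuration sketch with the word2vec-style linear decay (variable names are ours, not the paper's):

```python
WINDOW_SIZE = 10    # symmetric context window size
NEG_SAMPLES = 10    # negative samples per target word
INIT_LR     = 0.05  # initial learning rate, decayed linearly to zero

def learning_rate(words_seen, total_words, init_lr=INIT_LR):
    """Linear decay from init_lr to zero over the whole training run."""
    return init_lr * max(0.0, 1.0 - words_seen / float(total_words))
```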