Semantic Visualization for Short Texts with Word Embeddings
Authors: Tuan M. V. Le, Hady W. Lauw
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model on two public real-life short text datasets in Section 4. To validate our joint modeling, one class of baselines consists of pipelined approaches that apply dimensionality reduction to the outputs of topic models with word embeddings. To validate our modeling of word embeddings, the other class of baselines consists of semantic visualization models not using word vectors. |
| Researcher Affiliation | Academia | Tuan M. V. Le School of Information Systems Singapore Management University vmtle.2012@phdis.smu.edu.sg Hady W. Lauw School of Information Systems Singapore Management University hadywlauw@smu.edu.sg |
| Pseudocode | No | The paper describes a 'Generative Process' and 'Parameter Estimation' with mathematical formulations but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper provides links to the implementations of *baseline* models and external resources (e.g., Word2Vec) but does not provide a link or explicit statement for the open-source code of their proposed Gaussian SV model. |
| Open Datasets | Yes | Datasets. We use short texts from two public datasets. The first is BBC [Greene and Cunningham, 2006] (http://mlg.ucd.ie/datasets/bbc.html), which consists of 2,225 BBC news articles from 2004-2005, divided into 5 classes. We only use the title and headline of an article. The second is Search Snippet [Phan et al., 2008] (http://jwebpro.sourceforge.net/data-web-snippets.tar.gz), which consists of 12,340 Web search snippets belonging to 8 classes. |
| Dataset Splits | No | The paper describes data sampling and repeated runs for evaluation, but does not specify a conventional train/validation/test split for the model learning process itself with percentages or sample counts. |
| Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments (e.g., CPU, GPU models, or memory details). |
| Software Dependencies | No | The paper mentions using 'pre-trained 300-d word vectors from Word2Vec trained on Google News' and refers to author implementations of baseline models, but it does not list specific software dependencies with version numbers for their proposed Gaussian SV model. |
| Experiment Setup | Yes | We choose appropriate values for σ0 and σ. σ0 = 10000 and σ = 100 work well for most of the cases in our experiments. For each dataset, we sample 50 documents per class to create a well-balanced dataset. Each sample of Search Snippet has 400 documents and each sample of BBC has 250 documents. As the methods are probabilistic, we create 5 samples for each dataset, and run each sample 5 times. The reported performance numbers are averaged across 25 runs. We remove stopwords, perform stemming, and remove words that do not have pretrained word vectors. To update φz and xn, we use a gradient-based numerical optimization method such as the quasi-Newton method [Liu and Nocedal, 1989]. We alternate the E- and M-steps until some appropriate convergence criterion is reached. (Hedged code sketches of this preprocessing and optimization loop follow the table.) |
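
The balanced sampling and preprocessing quoted in the Experiment Setup row could be approximated as follows. This is a minimal sketch, not the authors' code: the tokenizer, the order of filtering versus stemming, the local path to the Google News vectors, and the `preprocess`/`sample_balanced` helper names are all assumptions.

```python
# Minimal sketch (not the authors' code): sample 50 documents per class and
# apply the preprocessing quoted above (stopword removal, stemming, dropping
# words that lack pre-trained Word2Vec vectors).
import random
from collections import defaultdict

from nltk.corpus import stopwords          # assumes the NLTK stopword list is downloaded
from nltk.stem import PorterStemmer
from gensim.models import KeyedVectors     # gensim >= 4.x API assumed

STOPWORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()

# Pre-trained 300-d Google News vectors, as stated in the paper; the local
# file path is an assumption.
W2V = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

def preprocess(text):
    """Lowercase, drop stopwords and words without pre-trained vectors, then stem.
    The exact order of these operations is an assumption."""
    tokens = [t for t in text.lower().split() if t not in STOPWORDS]
    tokens = [t for t in tokens if t in W2V.key_to_index]
    return [STEMMER.stem(t) for t in tokens]

def sample_balanced(docs, labels, per_class=50, seed=0):
    """Sample `per_class` documents per class (250 docs for BBC, 400 for Search Snippet)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for doc, label in zip(docs, labels):
        by_class[label].append(doc)
    return [(doc, label)
            for label, class_docs in by_class.items()
            for doc in rng.sample(class_docs, per_class)]
```

Repeating this sampling 5 times and running each sample 5 times would give the 25 runs over which the paper averages its reported numbers.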
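The paper updates φz and xn inside the M-step with a quasi-Newton method [Liu and Nocedal, 1989] and alternates E- and M-steps until convergence. The sketch below only illustrates that update pattern with SciPy's L-BFGS-B solver and a toy Gaussian objective; it is not the paper's Q-function, and the convergence test and function names are assumptions.

```python
# Illustrative only: alternating E-/M-steps where the M-step calls a
# quasi-Newton optimizer (SciPy's L-BFGS-B), mirroring the update scheme
# described for phi_z and x_n. The objective is a toy weighted squared-error
# term, not the paper's expected complete-data log-likelihood.
import numpy as np
from scipy.optimize import minimize

def neg_q(mu, data, resp):
    """Stand-in for the negative Q-function: responsibility-weighted squared
    distances between document vectors and a single mean in embedding space."""
    diff = data - mu
    return 0.5 * np.sum(resp * np.sum(diff ** 2, axis=1))

def em(data, n_iter=100, tol=1e-5):
    mu = data.mean(axis=0)          # initialize the parameter being optimized
    prev_obj = np.inf
    for _ in range(n_iter):
        # E-step: compute posteriors/responsibilities. Trivial here; in the
        # paper this is the expectation over topic assignments.
        resp = np.ones(data.shape[0])
        # M-step: gradient-based numerical (quasi-Newton) update.
        res = minimize(neg_q, mu, args=(data, resp), method="L-BFGS-B")
        mu = res.x
        if abs(prev_obj - res.fun) < tol:   # convergence criterion (assumption)
            break
        prev_obj = res.fun
    return mu

# Example usage on random 300-d "document coordinates".
coords = np.random.default_rng(0).normal(size=(250, 300))
print(em(coords)[:5])
```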