Sketch the Storyline with CHARCOAL: A Non-Parametric Approach
Authors: Siliang Tang, Fei Wu, Si Li, Weiming Lu, Zhongfei Zhang, Yueting Zhuang
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental analysis and results demonstrate both interpretability and superiority of the proposed approach. |
| Researcher Affiliation | Academia | Siliang Tang, Fei Wu, Si Li, Weiming Lu, Zhongfei Zhang, Yueting Zhuang, College of Computer Science, Zhejiang University, Hangzhou, China {siliang, wufei, lisi zzz, luwm, zhongfei, yzhuang}@zju.edu.cn |
| Pseudocode | Yes | Algorithm 1 Model Estimation of CHARCOAL |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | No | The paper mentions using "4,908 news articles from the New York Times International News section" and "collections of annotated news articles from New York Times with different major categories." It describes dataset characteristics in Table 2 but does not provide a specific link, DOI, or formal citation to a publicly available dataset for access. |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages or sample counts) for train/validation/test sets, nor does it explicitly mention a validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper mentions software like "Open NLP", "Stanford Named Entity Recognizer", and "Alchemy API" but does not provide specific version numbers for these or any other ancillary software components used in the experiments. |
| Experiment Setup | Yes | CHARCOAL and the two-stage clustering models are initialized with story-level concentration parameter α = 0.5, topic smoothing β = 0.05, and a time window size of 8 days. The HDP inference is performed through a collapsed Gibbs sampler with a burn-in of 500 samples; its hyperparameters are updated every 50 iterations. The hLDA inference is also performed by Gibbs sampling with a burn-in period of 500 samples; its hyperparameters are fixed and initialized as follows: topic smoothing α = 0.1, word smoothing η = 0.1, and nCRP concentration parameter λ = 1. |