Unsupervised Cross-Domain Rumor Detection with Contrastive Learning and Cross-Attention
Authors: Hongyan Ran, Caiyan Jia
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on four groups of cross-domain datasets and show that our proposed model achieves state-of-the-art performance. |
| Researcher Affiliation | Academia | Hongyan Ran, Caiyan Jia* School of Computer and Information Technology & Beijing Key Lab of Traffic Data Analysis and Mining Beijing Jiaotong University, Beijing 100044, China {hongyran,cyjia}@bjtu.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper links to 'Supplementary Material' (https://github.com/rhy1111/Supplementary Material) but does not explicitly state that the source code for the methodology is provided at this link or elsewhere. |
| Open Datasets | Yes | We evaluate the UCD-RD model on four groups of real-world cross-domain rumor datasets. The first group of data comes from PHEME (Zubiaga et al. 2015) dataset... The second group of cross-domain data is Twitter dataset (Ma, Gao, and Wong 2017) and Twitter-Covid19 dataset (Lin et al. 2022). The third group of datasets includes the Twitter15 dataset and the Twitter16 dataset (Ma, Gao, and Wong 2018). The fourth group of cross-domain data is the Chinese Weibo dataset (Ma et al. 2016) and the Weibo-Covid19 dataset (Lin et al. 2022). |
| Dataset Splits | No | The paper mentions training and testing but does not provide specific details on a validation dataset split (percentages, counts, or explicit use of a 'validation set'). The target data is used for pseudo-labeling, not explicit validation with ground truth. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or specific cloud instances) used for running the experiments, only general software frameworks are mentioned. |
| Software Dependencies | No | The paper mentions 'Keras' and 'Pytorch' (cited via footnotes), which refer to frameworks but do not provide specific version numbers required for reproducible software dependencies. |
| Experiment Setup | Yes | The training process is iterated for 300 epochs. The temperature τ is 0.1. For instance, for the Terrorist Gossip data, when the hyperparameters are α1 = 0.9, α2 = 0.1, β1 = 0.7, β2 = 0.3, and γ1 = 0.8, γ2 = 0.1, γ3 = 0.1, UCD-RD achieves the best performance. |
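The table quotes a contrastive-learning temperature of τ = 0.1 but the paper's exact objective is not reproduced here. As a hedged illustration only, the following is a generic temperature-scaled contrastive (NT-Xent / InfoNCE-style) loss of the kind such a setting typically parameterizes; the function name and the paired-embedding setup are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.1):
    """Generic temperature-scaled contrastive loss (sketch).

    z1, z2: (N, d) arrays of paired embeddings; row i of z1 and row i
    of z2 are two views of the same example (the positive pair), while
    all other rows serve as in-batch negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) scaled similarity matrix
    # Row-wise softmax cross-entropy, with the diagonal as the positive.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

A smaller temperature (such as the quoted 0.1) sharpens the softmax over similarities, so the loss penalizes hard negatives more strongly than a temperature near 1 would.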