Public Wisdom Matters! Discourse-Aware Hyperbolic Fourier Co-Attention for Social Text Classification

Authors: Karish Grover, S. M. Phaneendra Angara, Md. Shad Akhtar, Tanmoy Chakraborty

Venue: NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four different social-text classification tasks, namely detecting fake news, hate speech, rumour, and sarcasm, show that Hyphen generalises well, and achieves state-of-the-art results on ten benchmark datasets.
Researcher Affiliation | Collaboration | Karish Grover, IIIT Delhi, India (karish19471@iiitd.ac.in); S. M. Phaneendra Angara, LinkedIn, India (sangara@linkedin.com); Md. Shad Akhtar, IIIT Delhi, India (shad.akhtar@iiitd.ac.in); Tanmoy Chakraborty, IIT Delhi, India (tanchak@ee.iitd.ac.in)
Pseudocode | No | The paper describes the architecture and its procedures in text and diagrams but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at: https://github.com/LCS2-IIITD/Hyphen.
Open Datasets | Yes | We evaluate the performance of Hyphen on four different social-text classification tasks across ten datasets (cf. Table 1): (i) fake news detection (Politifact [43], GossipCop [43], AntiVax [44]); (ii) hate speech detection (HASOC [45]); (iii) rumour detection (Pheme [46], Twitter15 [47], Twitter16 [47], RumourEval [48]); and (iv) sarcasm detection (FigLang-Twitter [49], FigLang-Reddit [49]).
Dataset Splits | No | The paper's mention of an 'early stopping patience of 10 epochs' implies a validation set, but no training/validation/test split details (e.g., percentages or sample counts) are given in the main text; a hypothetical split is sketched below the table.
Hardware Specification | Yes | We run all experiments for 100 epochs with early stopping patience of 10 epochs, on an NVIDIA RTX A6000 GPU.
Software Dependencies | No | The paper mentions using 'Riemannian Adam from Geoopt [61]' for optimization but does not give version numbers for Geoopt or any other software dependencies such as Python or PyTorch; a minimal Geoopt usage sketch appears below the table.
Experiment Setup | Yes | To find the optimal k (latent dimension; see Equation 5) for hyperbolic co-attention, we run a grid search over k ∈ {50, 80, 128, 256}, and finally use k = 128. For HGCN, we use two layers with curvatures K1 = K2 = -1. We run all experiments for 100 epochs with early stopping patience of 10 epochs. (A sketch of this protocol appears below the table.)
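
On dataset splits: since the paper does not report its ratios or sample counts, the following is a minimal sketch of how a held-out validation set for early stopping could be carved out, assuming a conventional stratified 70/10/20 train/validation/test split. The ratios, placeholder data, and random seed are all assumptions, not values from the paper.

```python
# Hypothetical 70/10/20 stratified split; the paper does not report its
# actual split. Placeholder data stands in for the social-media posts.
from sklearn.model_selection import train_test_split

texts = [f"post {i}" for i in range(1000)]   # placeholder posts
labels = [i % 2 for i in range(1000)]        # placeholder binary labels

# First carve out the 20% test set.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    texts, labels, test_size=0.20, stratify=labels, random_state=42
)
# Then take 10% of the full data (0.125 of the remaining 80%) for the
# validation set that the implied early stopping would monitor.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.125,
    stratify=y_trainval, random_state=42
)
```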
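On the Geoopt dependency: the paper only names 'Riemannian Adam from Geoopt [61]'. The sketch below shows how that optimizer is typically instantiated on a Poincaré ball of curvature -1, matching the reported K1 = K2 = -1; the embedding table, its dimensions, and the learning rate are illustrative assumptions, not details from the paper.

```python
import geoopt

# Curvature parameter c = 1.0 gives a Poincare ball of constant
# sectional curvature -1, matching the paper's K1 = K2 = -1.
ball = geoopt.PoincareBall(c=1.0)

# A hypothetical hyperbolic embedding table; the paper does not describe
# its trainable parameters at this level of detail.
embeddings = geoopt.ManifoldParameter(
    ball.random_normal(1000, 128, std=1e-2), manifold=ball
)

# Riemannian Adam takes Riemannian gradient steps for manifold parameters
# and behaves like ordinary Adam for Euclidean ones. The learning rate
# here is an illustrative assumption.
optimizer = geoopt.optim.RiemannianAdam([embeddings], lr=1e-3)
```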
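The reported experimental protocol (grid search over k, 100 epochs, early stopping patience of 10) can be summarized as the sketch below. Here build_model, train_one_epoch, and evaluate are hypothetical stubs included only to make the loop runnable, and selecting k by validation loss is an assumption about how the grid search was scored.

```python
import random

# Hypothetical stand-ins for the paper's training pipeline; they only
# make the sketch runnable and are NOT part of the paper.
def build_model(latent_dim):
    return {"k": latent_dim}

def train_one_epoch(model):
    pass

def evaluate(model):
    return random.random()  # stand-in for a validation loss

best_k, best_val = None, float("inf")
for k in (50, 80, 128, 256):      # grid reported in the paper; k = 128 was chosen
    model = build_model(k)
    best_loss, bad_epochs, patience = float("inf"), 0, 10
    for epoch in range(100):      # 100 epochs, as reported
        train_one_epoch(model)
        val_loss = evaluate(model)
        if val_loss < best_loss:
            best_loss, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # early stopping patience of 10
                break
    if best_loss < best_val:
        best_val, best_k = best_loss, k

print(f"selected k = {best_k}")
```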