Prompt-enhanced Network for Hateful Meme Classification
Authors: Junxi Liu, Yanyan Feng, Jiehai Chen, Yun Xue, Fenghuan Li
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive ablation experiments on two public datasets, we evaluate the effectiveness of the Pen framework, concurrently comparing it with state-of-the-art model baselines. |
| Researcher Affiliation | Academia | ¹School of Electronics and Information Engineering, South China Normal University, Foshan 528225, China; ²School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China. {liujunxi, fengyanyan, cjh_scnu, xueyun}@m.scnu.edu.cn, fhli20180910@gdut.edu.cn |
| Pseudocode | No | The paper uses formulas and block diagrams but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/juszzi/Pen. |
| Open Datasets | Yes | We conducted evaluations using two publicly available datasets: (1) FHM [Kiela et al., 2020] and (2) HarM [Pramanick et al., 2021a]. |
| Dataset Splits | No | Table 1 provides 'Training' and 'Test' counts for both datasets, but no validation split is mentioned or quantified anywhere in the text or tables. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using the 'RoBERTa-large model [Liu et al., 2019]' but does not specify version numbers for this or any other key software dependency (e.g., Python, PyTorch, or Transformers versions); a hedged example of what a pinned setup could look like is sketched after the table. |
| Experiment Setup | No | The paper mentions cross-entropy loss and the hyperparameters α and β that weight the loss terms, but it omits other crucial training parameters: learning rate, batch size, optimizer, number of epochs, and model initialization or scheduling details. These are essential for a complete experimental setup description (see the loss sketch after the table). |
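
To make the Experiment Setup finding concrete, the paper's stated objective (cross-entropy terms weighted by hyperparameters α and β) can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation from the Pen repository; the number of auxiliary terms, the module name, and the default α/β values below are assumptions for illustration only.

```python
import torch.nn as nn

class WeightedPenLoss(nn.Module):
    """Hypothetical sketch: a main cross-entropy term plus two auxiliary
    cross-entropy terms weighted by alpha and beta. The term structure and
    default weights are assumptions, not taken from the released Pen code."""

    def __init__(self, alpha: float = 0.5, beta: float = 0.5):
        super().__init__()
        self.alpha = alpha
        self.beta = beta
        self.ce = nn.CrossEntropyLoss()

    def forward(self, main_logits, aux1_logits, aux2_logits, labels):
        # Main hateful / non-hateful classification loss.
        loss_main = self.ce(main_logits, labels)
        # Auxiliary losses (e.g., prompt-side predictions), weighted by alpha and beta.
        loss_aux1 = self.ce(aux1_logits, labels)
        loss_aux2 = self.ce(aux2_logits, labels)
        return loss_main + self.alpha * loss_aux1 + self.beta * loss_aux2
```

Even with this structure fixed, reproducing the reported results would still require the missing learning rate, batch size, optimizer, and epoch count.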
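
Similarly, for the Software Dependencies finding: since the paper names RoBERTa-large but pins no versions, a reproducible setup would record the exact environment. The versions in the comment and the loading code below are illustrative assumptions, not the authors' documented configuration.

```python
# Hypothetical pinned environment (versions are assumptions, not from the paper):
#   python==3.9, torch==2.1.0, transformers==4.35.0
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaModel.from_pretrained("roberta-large")

inputs = tokenizer("an example meme caption", return_tensors="pt")
outputs = model(**inputs)  # outputs.last_hidden_state: (1, seq_len, 1024)
```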