Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

BeyondGender: A Multifaceted Bilingual Dataset for Practical Sexism Detection

Authors: Xuan Luo, Li Yang, Han Zhang, Geng Tu, Qianlong Wang, Keyang Ding, Chuang Fan, Jing Li, Ruifeng Xu

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Our evaluations of masked language models and large language models reveal that they detect misogyny in English and misandry in Chinese more effectively, with F1-scores of 0.87 and 0.62, respectively. However, they frequently misclassify hostile and mild comments, underscoring the complexity of sexism detection.
Researcher Affiliation Academia (1) Harbin Institute of Technology, Shenzhen, China; (2) Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China; (3) Peng Cheng Laboratory, Shenzhen, China; (4) Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies, China; (5) Research Centre on Data Science & Artificial Intelligence, Hong Kong, China
Pseudocode No The paper describes the annotation workflow and provides examples of annotated data. It discusses the methodology for evaluating models and the experimental setup, but it does not present any structured pseudocode or algorithm blocks.
Open Source Code Yes Task descriptions declared in the Prefix for other labels will be shared with code and data.
Open Datasets Yes We introduce BeyondGender, a dataset meticulously annotated according to the latest definitions of misogyny and misandry. BeyondGender will be made available on GitHub. We also leverage recent sexism detection datasets because they represent the contemporary sexism culture and they are collected from different social platforms: 1) English dataset EDOS (Kirk et al. 2023) and 2) Chinese dataset SWSR (Jiang et al. 2022).
Dataset Splits Yes The split of the dataset is listed in Table 3:

                 Sexism                    Other Labels
    Language     train     dev     test    train    dev    test
    English      10,233    1,000   485     4,733    500    485
    Chinese      6,501     700     500     1,099    120    500

The randomly sampled train and dev sets for level-2 labels contain only instances labeled as sexism.
Hardware Specification No The paper discusses evaluating masked language models (MLMs) and large language models (LLMs) but does not provide specific details about the hardware (e.g., GPU models, CPU types) used for training or experimentation.
Software Dependencies No The paper mentions several models such as BERT (Devlin et al. 2019; Cui et al. 2019), RoBERTa (Liu et al. 2019; Cui et al. 2020), DeBERTa (He, Gao, and Chen 2022), ChatGPT (OpenAI 2022), ChatGLM (Du et al. 2022), Baichuan (Yang et al. 2023), LLaMA (Touvron et al. 2023), and Alpaca (Taori et al. 2023). However, it does not specify version numbers for underlying software libraries or programming languages (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup Yes For masked language models, we train five separate classifiers, one for each of the five labels. During training, we set the random seed to 42, the learning rate to 1e-5, and the batch size to 16 with the Adam optimizer. We vary the number of epochs over 1, 5, 10, 15, 20, 30, and 40.
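The hyperparameters reported under Experiment Setup (seed 42, learning rate 1e-5, batch size 16, Adam, and an epoch grid) can be captured as an explicit configuration. The sketch below is a minimal, framework-agnostic illustration for reproduction purposes; it is not the authors' training script, and the field names are our own.

```python
import random

# Hyperparameters as reported in the paper's experiment setup.
# The paper does not pin a framework, so only Python's stdlib RNG
# is seeded here; a real run would also seed numpy/torch.
CONFIG = {
    "seed": 42,
    "learning_rate": 1e-5,
    "batch_size": 16,
    "optimizer": "Adam",
    "epoch_grid": [1, 5, 10, 15, 20, 30, 40],
    "num_classifiers": 5,  # one per label in the MLM setting
}

def set_seed(seed: int) -> None:
    """Seed the stdlib RNG for reproducibility."""
    random.seed(seed)

set_seed(CONFIG["seed"])
```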
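The F1-scores quoted under Research Type (0.87 for English, 0.62 for Chinese) combine precision P and recall R as F1 = 2PR / (P + R). A minimal, self-contained binary F1 implementation over plain label lists, for illustration only (this is not the paper's evaluation code):

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1 = 2PR/(P+R) from parallel lists of labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: one true positive, one false positive, one false negative
# gives precision = recall = 0.5, so F1 = 0.5.
score = f1_score([1, 1, 0, 0], [1, 0, 1, 0])
```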