SafeWorld: Geo-Diverse Safety Alignment

Authors: Da Yin, Haoyi Qiu, Kung-Hsiang Huang, Kai-Wei Chang, Nanyun Peng

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our evaluations reveal that current LLMs struggle to meet these criteria. To enhance LLMs' alignment with geo-diverse safety standards, we synthesize helpful preference pairs for Direct Preference Optimization (DPO) alignment training. The preference pair construction aims to encourage LLMs to behave appropriately and provide precise references to relevant cultural norms and policies when necessary. Our trained SAFEWORLDLM outperforms all competing models, including GPT-4o, on all three evaluation dimensions by a large margin. Global human evaluators also note a nearly 20% higher winning rate in helpfulness and harmfulness evaluation. (A sketch of the DPO preference-pair format follows the table.)
Researcher Affiliation | Collaboration | Da Yin (UCLA, da.yin@cs.ucla.edu); Haoyi Qiu (UCLA, haoyiqiu@cs.ucla.edu); Kung-Hsiang Huang (Salesforce AI Research, kh.huang@salesforce.com); Kai-Wei Chang (UCLA, kwchang@cs.ucla.edu); Nanyun Peng (UCLA, violetpeng@cs.ucla.edu)
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper.
Open Source Code | Yes | Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: Yes, we provide the details in §3, §4, and §5.
Open Datasets | Yes | We introduce SAFEWORLD, the first geo-diverse safety alignment evaluation benchmark, focusing on cultural and legal safety (§3). SAFEWORLD evaluates an LLM's ability to generate helpful, safe, and appropriate responses in a global context. ... The final DPO training dataset SAFEWORLDALIGN contains 45,746 preference pairs: 26,382 for Negative Category 1 and 19,364 for Negative Category 2. We refer to our alignment models as SAFEWORLDLM.
Dataset Splits | Yes | The final DPO training dataset SAFEWORLDALIGN contains 45,746 preference pairs: 26,382 for Negative Category 1 and 19,364 for Negative Category 2. ... The final evaluation set consists of 2,775 human-verified queries, while the remaining queries serve as raw training data, detailed in §5.2.
Hardware Specification | Yes | Consistent with the handbook's guidelines, we conduct training with 4 NVIDIA A100 80GB GPUs for one epoch using a batch size of 32, a learning rate of 5 × 10⁻⁷, a β value of 0.01 in the DPO loss function, and a warmup rate of 0.1.
Software Dependencies | Yes | Following the open-source LLM alignment method outlined in the Huggingface Alignment Handbook [37], we employ the DPO training on top of an initial reference policy, Zephyr-7B-SFT-Full, an already supervised fine-tuned (SFT) model. ... Detailed information about the model versions can be found in Table 6.
Experiment Setup | Yes | Consistent with the handbook's guidelines, we conduct training with 4 NVIDIA A100 80GB GPUs for one epoch using a batch size of 32, a learning rate of 5 × 10⁻⁷, a β value of 0.01 in the DPO loss function, and a warmup rate of 0.1. (A hedged training-configuration sketch follows the table.)
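
As quoted in the Research Type and Dataset Splits rows, SAFEWORLDALIGN is built from synthesized preference pairs for DPO training. The sketch below is a minimal, hypothetical illustration of such a record in the prompt/chosen/rejected layout commonly used for DPO data (e.g., by TRL and the Alignment Handbook); the example text and field names are assumptions rather than the paper's actual data format, while the split counts are the ones quoted above.

```python
# Hypothetical sketch of a DPO preference-pair record in the prompt/chosen/rejected
# layout commonly used for DPO training. The example text is illustrative only and
# is not taken from SAFEWORLDALIGN.
example_pair = {
    "prompt": "A friend from another country invited me to a funeral. How should I behave?",
    "chosen": (
        "A response that behaves appropriately and cites the relevant cultural "
        "norms and policies, as the paper's preference-pair construction encourages."
    ),
    "rejected": (
        "A response that violates local norms or provides no reference to the "
        "relevant cultural or legal guidance."
    ),
}

# Split sizes reported in the paper: the two negative-response categories
# together make up the full SAFEWORLDALIGN training set.
negative_category_1 = 26_382
negative_category_2 = 19_364
assert negative_category_1 + negative_category_2 == 45_746
```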
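The Hardware Specification, Software Dependencies, and Experiment Setup rows quote a DPO recipe: the Huggingface Alignment Handbook applied on top of Zephyr-7B-SFT-Full, trained for one epoch on 4 NVIDIA A100 80GB GPUs with a batch size of 32, a learning rate of 5 × 10⁻⁷, β = 0.01, and a warmup rate of 0.1. The snippet below is a minimal, hedged sketch of how those hyperparameters could map onto TRL's DPOConfig and DPOTrainer, which the Alignment Handbook builds on. It is not the authors' exact recipe: argument names vary across TRL versions, the dataset path is hypothetical, and the per-device batch size of 8 (times 4 GPUs) is an assumption.

```python
# Minimal sketch (not the authors' exact recipe) of DPO training with the
# hyperparameters quoted above, using TRL. Argument names follow recent TRL
# releases and may differ in other versions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# SFT reference policy named in the paper; the hub ID is assumed here.
model_name = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Hypothetical local file; the paper's SAFEWORLDALIGN preference pairs would go here.
train_dataset = load_dataset("json", data_files="safeworldalign_pairs.jsonl", split="train")

config = DPOConfig(
    output_dir="safeworldlm-dpo",
    num_train_epochs=1,                # one epoch, as reported
    per_device_train_batch_size=8,     # assumption: 8 per device x 4 A100s = batch size 32
    learning_rate=5e-7,                # as reported
    beta=0.01,                         # DPO beta, as reported
    warmup_ratio=0.1,                  # warmup rate, as reported
    bf16=True,                         # common choice on A100s (assumption)
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```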