Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Large Language Models can be Guided to Evade AI-generated Text Detection
Authors: Ning Lu, Shengcai Liu, Rui He, Yew-Soon Ong, Qi Wang, Ke Tang
TMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments across three real-world tasks demonstrate that SICO significantly outperforms the paraphraser baselines and enables GPT-3.5 to successfully evade six detectors, decreasing their AUC by 0.5 on average. Furthermore, a comprehensive human evaluation shows that the SICO-generated text achieves human-level readability and task completion rates, while preserving high imperceptibility. |
| Researcher Affiliation | Academia | The authors are affiliated with universities such as Southern University of Science and Technology, Hong Kong University of Science and Technology, and Nanyang Technological University, and public research institutions like the Agency for Science, Technology and Research (A*STAR). All listed institutions are academic or public research bodies, not private corporations. |
| Pseudocode | Yes | Algorithm 1: Substitution-based in-context example optimization (SICO); Algorithm 2: Greedy text optimization (Greedy OPT); Algorithm 3: Feature selections |
| Open Source Code | Yes | The code is publicly available at https://github.com/ColinLu50/Evade-GPT-Detector. |
| Open Datasets | Yes | For academic writing, we employ Wikipedia paragraphs from SQuAD dataset (Rajpurkar et al., 2016) as human-written text. For open-ended question answering, we sample questions from Eli5 (Fan et al., 2019) dataset... human-written reviews from Yelp dataset (Zhang et al., 2015)... Human Chat GPT Comparison Corpus (HC3) dataset (Guo et al., 2023)... GPT2 output dataset (Solaiman et al., 2019). |
| Dataset Splits | Yes | For each task, we collect 200 examples from GPT-3.5 (called original AI-generated text) and 200 human-written examples from the corresponding dataset... We set |Xeval| = 32, K = 8... For each task, we evaluate AUC score using 200 human-written texts and 200 original or paraphrased AI-generated texts... we fine-tune a RoBERTa model with 5k SICO examples and 5k human-written examples. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments. It mentions LLMs like GPT-3.5 and GPT2-medium, and computational resources, but no specific GPU, CPU, or other hardware model numbers are provided. |
| Software Dependencies | No | The paper mentions using specific models like 'gpt-3.5-turbo-0301', 'RoBERTa model', 'GPT2-medium', and tools like 'Stanford POS Tagger' and 'WordNet'. However, it does not provide specific version numbers for general software libraries or frameworks (e.g., Python, PyTorch, TensorFlow) that would be needed to reproduce the experiment. |
| Experiment Setup | Yes | We set |Xeval| = 32, K = 8, N = 6, and use GPT-3.5, specifically gpt-3.5-turbo-0301, as the LLM, where the inference parameters are kept at their defaults... We choose the best evasion performance parameter setting from the original paper (Krishna et al., 2023), which is 60 for lexical diversity and 60 for re-ordering, and we set the sampling temperature to 0.75... We use the z-score implementation of DetectGPT and set the sample number to 100 and the replacement ratio to 0.3. |
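The evaluation above reports detector AUC over 200 human-written and 200 AI-generated texts per task. As a minimal sketch of that metric (not the authors' code; detector scores and sample values here are illustrative), AUC can be computed as the probability that a randomly chosen AI-generated text receives a higher detector score than a randomly chosen human-written one, with ties counted as one half:

```python
def auc(human_scores, ai_scores):
    """Rank-based AUC: fraction of (AI, human) pairs where the detector
    scores the AI-generated text higher; ties contribute 0.5."""
    wins = 0.0
    for a in ai_scores:
        for h in human_scores:
            if a > h:
                wins += 1.0
            elif a == h:
                wins += 0.5
    return wins / (len(ai_scores) * len(human_scores))

# Toy detector scores (higher = "more likely AI-generated").
human = [0.1, 0.2, 0.3, 0.4]
ai = [0.6, 0.7, 0.8, 0.2]
print(auc(human, ai))  # 0.84375
```

A perfect detector yields an AUC of 1.0 and random guessing yields 0.5, so the reported average drop of 0.5 pushes detectors toward (or below) chance-level discrimination.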