Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Position: Generative AI Regulation Can Learn from Social Media Regulation
Authors: Ruth Elisabeth Appel
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this position paper, we argue that the debates on generative AI regulation can be informed by evidence on social media regulation. ... First, we compare and contrast the affordances of generative AI and social media to highlight their similarities and differences. Then, we discuss four specific policy recommendations based on the evolution of social media and their regulation: (1) counter bias and perceptions thereof (e.g., via transparency, oversight boards, researcher access, democratic input), (2) address specific regulatory concerns (e.g., youth wellbeing, election integrity) and invest in trust and safety, (3) promote computational social science research, and (4) take on a more global perspective. |
| Researcher Affiliation | Academia | Stanford University, Stanford, CA, United States. Correspondence to: Ruth E. Appel <EMAIL>. |
| Pseudocode | No | The paper is a position paper discussing policy recommendations and comparing the affordances of generative AI and social media. It does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper is a position paper discussing policy and does not present new methodology that would typically involve source code release. No statement regarding code availability or a repository link is provided. |
| Open Datasets | No | The paper cites external research that may have used datasets (e.g., 'representative opinion polls', 'large language models were found to output biased opinions (Durmus et al., 2023; Santurkar et al., 2023)'), but it does not use its own dataset or provide access information for any dataset directly associated with the paper's analysis or claims. |
| Dataset Splits | No | This paper is a position paper that does not conduct experiments or use datasets; therefore, no dataset split information is provided. |
| Hardware Specification | No | This paper is a position paper that does not involve experimental work; therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | This paper is a position paper that does not involve experimental work; therefore, no specific software dependencies with version numbers are mentioned. |
| Experiment Setup | No | This paper is a position paper that does not involve experimental work; therefore, no specific experimental setup details or hyperparameters are mentioned. |
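The notice above states that the LLM-based classifications were validated against a manually labeled dataset. As a minimal sketch of what such an agreement check could look like (all field names and labels below are hypothetical; the actual pipeline and accuracy metrics are described in [1]):

```python
# Illustrative only: compare automated LLM labels to manual labels
# per reproducibility variable and report simple agreement rates.
from collections import defaultdict

def agreement_by_variable(llm_labels, manual_labels):
    """Return {variable: fraction of papers where LLM and manual labels match}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for paper_id, manual in manual_labels.items():
        llm = llm_labels.get(paper_id, {})
        for variable, gold in manual.items():
            totals[variable] += 1
            if llm.get(variable) == gold:
                hits[variable] += 1
    return {v: hits[v] / totals[v] for v in totals}

# Hypothetical example data for one paper.
llm = {"paper_1": {"Open Source Code": "No", "Pseudocode": "No"}}
manual = {"paper_1": {"Open Source Code": "No", "Pseudocode": "Yes"}}
print(agreement_by_variable(llm, manual))
# {'Open Source Code': 1.0, 'Pseudocode': 0.0}
```

A per-variable breakdown like this makes it possible to report which reproducibility variables the automated classifier handles reliably and which carry more uncertainty.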