Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On the Challenges and Opportunities in Generative AI
Authors: Laura Manduchi, Clara Meister, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, Sophie Fellenz, Asja Fischer, Thomas Gärtner, Matthias Kirchler, Marius Kloft, Yingzhen Li, Christoph Lippert, Gerard de Melo, Eric Nalisnick, Björn Ommer, Rajesh Ranganath, Maja Rudolph, Karen Ullrich, Guy Van den Broeck, Julia E Vogt, Yixin Wang, Florian Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this work, our objective is to identify these issues and highlight key unresolved challenges in modern generative AI paradigms that should be addressed to further enhance their capabilities, versatility, and reliability. By identifying these challenges, we aim to provide researchers with insights for exploring fruitful research directions, thus fostering the development of more robust and accessible generative AI solutions. ... This work offers a collection of views and opinions from different communities about these key unresolved challenges in generative AI, with the ultimate goal of guiding future research toward what we perceive are the most critical and promising areas. |
| Researcher Affiliation | Collaboration | 1: ETH Zürich; 2: UC Irvine; 3: University of Tübingen; 4: Ruhr-University Bochum; 5: RPTU Kaiserslautern-Landau; 6: TU Wien; 7: Hasso Plattner Institute; 8: Imperial College London; 9: University of Potsdam; 10: Johns Hopkins University; 11: LMU Munich; 12: New York University; 13: University of Wisconsin-Madison; 14: Meta AI; 15: UCLA; 16: University of Michigan; 17: Mirelo AI; 18: University of British Columbia; 19: Helmholtz AI; 20: TU Munich |
| Pseudocode | No | The paper includes summary tables (e.g., Table 1, Table 2, Table 3) that outline challenges and mitigation avenues but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the methodology described. It refers to 'Reviewed on Open Review: https://openreview.net/forum?id=NeS9Kj2JwF' and a Dagstuhl Seminar link, which are not code repositories. |
| Open Datasets | Yes | Datasets: WILDS; ImageNet-C. Surveys: Shen et al. (2021); Yang et al. (2023b); Li et al. (2023c) ... Datasets: Waterbirds; Colored-MNIST. Surveys: Ye et al. (2024) ... Benchmarks: CausalBench. Surveys: Komanduri et al. (2024) ... Datasets: MMIST-CCRCC; GMAI-MMBench. Surveys: Shaik et al. (2024) ... Datasets: LongBench. Surveys: Tay et al. (2022a) |
| Dataset Splits | No | This paper is a survey of challenges and opportunities in generative AI. It does not conduct its own experiments or define dataset splits; rather, it discusses general practices and findings from other research. It therefore provides no dataset split information. |
| Hardware Specification | No | The paper discusses the computational requirements and costs of large-scale generative models and mentions 'a single A100 GPU with 80GB of memory' in the context of other researchers' work (OPTQ by Frantar et al., 2023). However, it does not specify any hardware used for its own research or analysis. |
| Software Dependencies | No | The paper discusses various generative AI models and frameworks, such as LLMs, diffusion models, GPT-3, PyTorch, and TensorFlow, but does not specify versioned software dependencies for its own analysis. As a survey, it involves no experimental implementation. |
| Experiment Setup | No | As a survey, this work analyzes existing research and identifies challenges and opportunities in generative AI. It presents no original experiments and therefore includes no details on experimental setup, hyperparameters, or training configurations. |