Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Reliable and Responsible Foundation Models

Authors: Xinyu Yang, Junlin Han, Rishi Bommasani, Jinqi Luo, Wenjie Qu, Wangchunshu Zhou, Adel Bibi, Xiyao Wang, Jaehong Yoon, Elias Stengel-Eskin, Shengbang Tong, Lingfeng Shen, Rafael Rafailov, Runjia Li, Zhaoyang Wang, Yiyang Zhou, Chenhang Cui, Yu Wang, Wenhao Zheng, Huichi Zhou, Jindong Gu, Zhaorun Chen, Peng Xia, Tony Lee, Thomas P Zollo, Vikash Sehwag, Jixuan Leng, Jiuhai Chen, Yuxin Wen, Huan Zhang, Zhun Deng, Linjun Zhang, Pavel Izmailov, Pang Wei Koh, Yulia Tsvetkov, Andrew Gordon Wilson, Jiaheng Zhang, James Zou, Cihang Xie, Hao Wang, Philip Torr, Julian McAuley, David Alvarez-Melis, Florian Tramèr, Kaidi Xu, Suman Jana, Chris Callison-Burch, Rene Vidal, Filippos Kokkinos, Mohit Bansal, Beidi Chen, Huaxiu Yao

TMLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This survey addresses the reliable and responsible development of foundation models. We explore critical issues, including bias and fairness, security and privacy, uncertainty, explainability, and distribution shift. Our research also covers model limitations, such as hallucinations, as well as methods like alignment and Artificial Intelligence-Generated Content (AIGC) detection. For each area, we review the current state of the field and outline concrete future research directions. Additionally, we discuss the intersections between these areas, highlighting their connections and shared challenges. We hope our survey fosters the development of foundation models that are not only powerful but also ethical, trustworthy, reliable, and socially responsible.
Researcher Affiliation | Academia | 1 Carnegie Mellon University; 2 University of Oxford; 3 Stanford University; 4 University of Pennsylvania; 5 National University of Singapore; 6 ETH Zurich; 7 University of Maryland; 8 UNC Chapel Hill; 9 New York University; 10 Johns Hopkins University; 11 University of California, San Diego; 12 Imperial College London; 13 University of Chicago; 14 Columbia University; 15 Princeton University; 16 University of Montreal & Mila; 17 Rutgers University; 18 University of Washington; 19 University of California, Santa Cruz; 20 Harvard University; 21 Drexel University; 22 University College London
Pseudocode | No | The paper is a survey and does not present new algorithms or methods in structured pseudocode or algorithm blocks. It discusses methodologies conceptually or through mathematical expressions.
Open Source Code | No | The paper is a survey and does not introduce new methods with an accompanying open-source code release statement or link.
Open Datasets | No | The paper is a survey that discusses existing datasets in the context of other research. It does not introduce a new dataset or provide access information (links, DOIs, specific citations) for a dataset the authors created or used in experiments of their own.
Dataset Splits | No | The paper is a survey and does not conduct original experiments requiring dataset splits. It discusses methodologies and findings from other research, which may involve dataset splits, but the authors provide no splits for work of their own.
Hardware Specification | No | The paper is a survey that reviews existing work. It does not describe any experiments conducted by the authors that would require specific hardware specifications.
Software Dependencies | No | The paper is a survey and does not describe implementation details or software dependencies for its own work. It discusses software and tools used in other research conceptually.
Experiment Setup | No | The paper is a survey of existing research and does not present original experimental work requiring a detailed setup, hyperparameters, or training configurations.