Position: Measure Dataset Diversity, Don’t Just Claim It

Authors: Dora Zhao, Jerone Andrews, Orestis Papakyriakopoulos, Alice Xiang

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our research explores the implications of this issue by analyzing diversity across 135 image and text datasets.
Researcher Affiliation | Collaboration | 1. Stanford University, Stanford, USA; 2. Sony AI, London, UK; 3. Technical University of Munich, Munich, Germany; 4. Sony AI, Seattle, USA
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the methodology described.
Open Datasets | Yes | Our scrutiny extends to 135 text and image datasets, where we uncover key considerations and offer recommendations for applying measurement theory to their collection. ... To inform our position, we conducted a systematic literature review encompassing 135 datasets presented as being more diverse.
Dataset Splits | No | The paper discusses validation as applied to datasets (e.g., "construct validity"), but it does not describe training, validation, or test splits for its own research, which is a systematic literature review and conceptual analysis.
Hardware Specification | No | The paper presents a systematic literature review and conceptual framework; it does not involve computational experiments that would require hardware specifications.
Software Dependencies | No | The paper presents a systematic literature review and conceptual framework; it does not specify any software components, with version numbers, used for its analysis.
Experiment Setup | No | The paper presents a systematic literature review and conceptual framework; it does not include details of an experimental setup such as hyperparameters, training configurations, or system-level settings.
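The paper releases no code (see the Pseudocode and Open Source Code rows above), but its central thesis, that diversity claims should be backed by explicit measurement, is easy to illustrate. The sketch below is a hypothetical example, not the authors' method: it scores a single annotated attribute of a dataset with two standard concentration measures, Shannon entropy and Simpson's diversity index. The attribute name and toy labels are invented for the example.

```python
# Hypothetical sketch (not from the paper): quantifying dataset "diversity"
# along one annotated attribute with two standard population-diversity
# measures. The "region" attribute and its labels below are invented.
from collections import Counter
from math import log


def shannon_entropy(labels):
    """Shannon entropy (nats) of the label distribution; higher = more even."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())


def simpson_diversity(labels):
    """Simpson's diversity index: probability two random samples differ."""
    counts = Counter(labels)
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values())


# Toy per-image annotations for a hypothetical "region" attribute.
region = ["NA", "NA", "NA", "NA", "EU", "EU", "AS", "AF"]
print(f"Shannon entropy:     {shannon_entropy(region):.3f}")
print(f"Simpson's diversity: {simpson_diversity(region):.3f}")
```

Reporting numbers like these alongside a documented construct (which attribute is measured, and why) is one way a dataset paper could substantiate, rather than merely assert, a diversity claim.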