Resource Democratization: Is Compute the Binding Constraint on AI Research?
Authors: Rebecca Gelles, Veronica Kinoshita, Micah Musser, James Dunham
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We addressed this gap by conducting a large survey of U.S. AI researchers that posed questions about project inputs, outcomes, and challenges. |
| Researcher Affiliation | Academia | Center for Security and Emerging Technology, Georgetown University {rebecca.gelles, ronnie.kinoshita, mrm311, james.dunham}@georgetown.edu |
| Pseudocode | No | The paper describes a survey methodology and its results; it does not present any algorithms or pseudocode. |
| Open Source Code | No | The paper states: 'The full instrument is available in an online appendix', with a footnote pointing to https://github.com/georgetown-cset/Compute Survey 2022. This link provides the survey instrument, not the source code for the data analysis or the described methodology. |
| Open Datasets | No | The paper describes a survey study, not one involving traditional datasets for training machine learning models. It uses data sources like Web of Science and LinkedIn to construct its sampling frame, not as public datasets for model training. |
| Dataset Splits | No | The paper describes a survey study and does not involve training, validation, or test splits of data in the context of machine learning experiments. |
| Hardware Specification | No | The paper discusses survey respondents' compute usage but does not specify the hardware used by the authors for their own data analysis or research. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers used for data analysis or other research activities. |
| Experiment Setup | Yes | The survey included 30-35 close-ended questions and one open-ended question, depending on each respondent's employment experiences and AI projects. Early versions of the survey instrument were refined through a series of cognitive interviews with AI researchers in academia and industry. We created a sampling frame by enumerating individuals who authored a paper at a top AI conference or journal, or worked in industry in an AI-related role. |
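For illustration only, the sketch below shows one way the sampling-frame step quoted in the Experiment Setup row could be carried out: take a list of authors at top AI venues and a list of industry AI-role holders, then deduplicate their union. The paper does not release code for this step (see Open Source Code above), so the file names, column names, and normalization here are assumptions made for this sketch, not artifacts from the authors.

```python
# Hypothetical sketch of the sampling-frame construction described above:
# merge authors from a publication export (e.g., Web of Science) with an
# industry-role list (e.g., LinkedIn-derived), then deduplicate.
# File names and column names are illustrative assumptions only.
import csv


def load_names(path: str, name_column: str) -> set[str]:
    """Read one column of a CSV and return a set of normalized names."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[name_column].strip().lower() for row in csv.DictReader(f)}


def build_sampling_frame(authors_csv: str, industry_csv: str) -> list[str]:
    """Union of top-venue authors and industry AI-role holders, deduplicated."""
    authors = load_names(authors_csv, "author_name")
    industry = load_names(industry_csv, "full_name")
    return sorted(authors | industry)


if __name__ == "__main__":
    # Both input files are placeholders for whatever exports are available.
    frame = build_sampling_frame("wos_ai_authors.csv", "industry_ai_roles.csv")
    print(f"Sampling frame size: {len(frame)}")
```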