Long-Term Trends in the Public Perception of Artificial Intelligence
Authors: Ethan Fast, Eric Horvitz
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Analyses of text corpora over time can reveal trends in beliefs, interest, and sentiment about a topic. We focus on views expressed about artificial intelligence (AI) in the New York Times over a 30-year period. We present a set of measures that captures levels of engagement, measures of pessimism and optimism, the prevalence of specific hopes and concerns, and topics that are linked to discussions about AI over decades. We find that discussion of AI has increased sharply since 2009, and that these discussions have been consistently more optimistic than pessimistic. |
| Researcher Affiliation | Collaboration | Ethan Fast (Stanford University, ethaen@stanford.edu) and Eric Horvitz (Microsoft Research, horvitz@microsoft.com) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing its source code or links to a code repository for the methodology described. |
| Open Datasets | No | The paper states that it uses data from the New York Times public API ('full set of articles published by the New York Times between January 1986 and May 2016 – more than 3 million articles in total') and processes it into 'more than 8000 paragraphs'. While the API is public, the paper does not provide access to its processed and annotated corpus, nor does it cite that corpus as an established public dataset with access details (a link, DOI, or formal citation). A hedged data-collection sketch appears below the table. |
| Dataset Splits | No | The paper mentions validating a classifier in the 'External validity' section ('In validation, we observe precision of 0.8 on a sample of 100 Reddit posts annotated with ground truth') but does not specify train/validation/test splits for its main analysis dataset or for the classifier itself, beyond the 100 posts used to validate precision. A sketch of the reported precision computation appears below the table. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run its analyses or classifier training. |
| Software Dependencies | No | The paper mentions the 'Beautiful Soup python package' but does not provide its version number or any other versioned software dependencies needed for replication. A hedged usage sketch appears below the table. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameters or training configurations for any of the models it mentions (e.g., the logistic regression classifier). A hedged classifier sketch with assumed settings appears below the table. |
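
The paper describes collecting every New York Times article from January 1986 through May 2016 via the newspaper's public API and filtering to AI-related paragraphs, but releases neither the collection code nor the processed corpus. The sketch below is one way such a step could look, assuming the NYT Archive API endpoint, its JSON field names (`response.docs`, `lead_paragraph`, `pub_date`), and an illustrative keyword list; none of these details come from the paper.

```python
"""Hedged sketch: pulling NYT articles month by month via the public Archive API
and keeping paragraphs that mention AI-related terms. The endpoint, field names,
and keyword list are assumptions; the paper only says it used the NYT public API."""
import time
import requests

API_KEY = "YOUR_NYT_API_KEY"  # placeholder; obtain a key from developer.nytimes.com
ARCHIVE_URL = "https://api.nytimes.com/svc/archive/v1/{year}/{month}.json"
AI_TERMS = ("artificial intelligence", "robot")  # illustrative keyword list, not the paper's

def fetch_month(year: int, month: int) -> list[dict]:
    """Return article metadata docs for one month (response layout assumed)."""
    resp = requests.get(ARCHIVE_URL.format(year=year, month=month),
                        params={"api-key": API_KEY}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("response", {}).get("docs", [])

def ai_paragraphs(docs: list[dict]) -> list[dict]:
    """Keep lead paragraphs that mention an AI-related term (case-insensitive)."""
    kept = []
    for doc in docs:
        text = doc.get("lead_paragraph") or ""
        if any(term in text.lower() for term in AI_TERMS):
            kept.append({"date": doc.get("pub_date"), "text": text})
    return kept

if __name__ == "__main__":
    corpus = []
    for year in range(1986, 2017):
        for month in range(1, 13):
            if (year, month) > (2016, 5):  # the paper's corpus ends in May 2016
                break
            corpus.extend(ai_paragraphs(fetch_month(year, month)))
            time.sleep(6)  # stay under the public API rate limit
    print(f"Collected {len(corpus)} AI-related paragraphs")
```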
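
The only named software dependency is the Beautiful Soup Python package, with no version given. Below is a minimal usage sketch, assuming any recent `bs4` release and assuming the package was used to strip HTML from article text; the paper does not say exactly how it was applied.

```python
"""Hedged sketch of an HTML-cleaning step with Beautiful Soup (version unspecified
in the paper; any recent bs4 release is assumed here)."""
from bs4 import BeautifulSoup

def strip_html(raw: str) -> str:
    """Drop markup and collapse whitespace, returning plain article text."""
    soup = BeautifulSoup(raw, "html.parser")
    return " ".join(soup.get_text(separator=" ").split())

# Example: markup in an article snippet is reduced to plain text.
print(strip_html("<p>Experts debate the <em>future of AI</em>.</p>"))
# -> "Experts debate the future of AI."
```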
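
The paper mentions a logistic regression classifier but reports no features, hyperparameters, or training configuration. The sketch below is a generic scikit-learn text-classification pipeline with assumed TF-IDF features and default L2 regularization; it illustrates the kind of setup that would need to be documented for replication, not the authors' actual model.

```python
"""Hedged sketch of a logistic-regression text classifier; the features,
regularization, and training data below are assumptions, since the paper
reports none of these details and does not release its annotations."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled paragraphs (1 = AI-related, 0 = not); stand-ins for the unreleased training data.
texts = [
    "Researchers unveil an artificial intelligence system for diagnosis.",
    "The city council approved a new transit budget on Tuesday.",
    "A robot powered by machine learning beat the champion.",
    "Local bakery wins award for sourdough bread.",
]
labels = [1, 0, 1, 0]

# Unigram TF-IDF features and L2-regularized logistic regression (assumed settings).
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)),
                    LogisticRegression(C=1.0, max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["Neural networks now write news articles."]))
```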
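
For the reported validation figure (precision of 0.8 on 100 annotated Reddit posts), the snippet below shows how that number is computed from predictions and ground truth. The label arrays are hypothetical stand-ins chosen only so the arithmetic reproduces 0.8; the paper's actual annotations are not available.

```python
"""Hedged sketch of the precision computation quoted from the paper; the label
arrays are fabricated stand-ins that merely illustrate the metric, not the data."""
from sklearn.metrics import precision_score

# Hypothetical ground-truth and predicted labels for 100 posts (1 = about AI).
y_true = [1] * 40 + [0] * 10 + [0] * 50   # 40 true positives, 10 false positives among predicted 1s
y_pred = [1] * 50 + [0] * 50              # the classifier flags the first 50 posts

# precision = TP / (TP + FP) = 40 / 50 = 0.8, matching the figure reported in the paper.
print(precision_score(y_true, y_pred))    # 0.8
```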