Predicting the Politics of an Image Using Webly Supervised Data
Authors: Christopher Thomas, Adriana Kovashka
NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally show that our method outperforms numerous baselines on both a large held-out webly supervised test set, and the set of crowdsourced annotations. |
| Researcher Affiliation | Academia | Christopher Thomas, Adriana Kovashka, Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15213, {chris,kovashka}@cs.pitt.edu |
| Pseudocode | No | The paper describes its method verbally and with a diagram, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our dataset, code, and additional materials are available online for download here: http://www.cs.pitt.edu/~chris/politics |
| Open Datasets | Yes | We propose and make available a very large dataset of biased images with paired text, and a large amount of diverse crowdsourced annotations regarding political bias. Our dataset, code, and additional materials are available online for download here: http://www.cs.pitt.edu/~chris/politics |
| Dataset Splits | No | The paper mentions training on a 'train set' and evaluating on a 'held-out test set' (75,148 images), but it does not explicitly state a separate validation split with specific sizes or percentages. |
| Hardware Specification | No | The paper does not specify the exact hardware components (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software like PyTorch, Doc2Vec (implicitly Gensim), TextBoxes++, and SymSpell, but it does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We train all models using Adam [34], with learning rate of 1.0e-4 and minibatch size of 64 images. We use cross-entropy loss and apply class-weight balancing to correct for slight data imbalance between L/R. We use an image size of 224x224 and random horizontal flipping as data augmentation. We use Xavier initialization [21] for non-pretrained layers. |
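
The Experiment Setup row above quotes concrete optimization details (Adam, learning rate 1.0e-4, minibatch size 64, weighted cross-entropy, 224x224 inputs with random horizontal flipping, Xavier initialization for non-pretrained layers). The snippet below is a minimal PyTorch sketch of that configuration, not the authors' released code; the ResNet-50 backbone, the two-class Left/Right head, the placeholder class weights, and the FakeData stand-in dataset are assumptions made purely for illustration.

```python
# Minimal PyTorch sketch of the quoted training setup. The ResNet-50 backbone,
# the two-way L/R head, the class-weight values, and the FakeData stand-in
# are illustrative assumptions, not details taken from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import FakeData

# 224x224 inputs with random horizontal flipping as data augmentation (as quoted).
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Placeholder dataset standing in for the webly supervised train set.
train_set = FakeData(size=256, image_size=(3, 256, 256), num_classes=2,
                     transform=train_transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)  # minibatch size 64

# Hypothetical pretrained backbone with a fresh 2-way (L/R) classification head.
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)
nn.init.xavier_uniform_(model.fc.weight)  # Xavier init for the non-pretrained layer
nn.init.zeros_(model.fc.bias)

# Class-weight balancing for the slight L/R imbalance; the weights below are
# placeholders, not values reported by the authors.
class_weights = torch.tensor([1.05, 0.95])

# Adam with learning rate 1.0e-4, as quoted.
optimizer = torch.optim.Adam(model.parameters(), lr=1.0e-4)

def train_one_epoch(model, loader, optimizer, device=None):
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    model.train().to(device)
    criterion = nn.CrossEntropyLoss(weight=class_weights.to(device))
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels.long())
        loss.backward()
        optimizer.step()

train_one_epoch(model, train_loader, optimizer)
```
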