Weakly-supervised Discovery of Visual Pattern Configurations
Authors: Hyun Oh Song, Yong Jae Lee, Stefanie Jegelka, Trevor Darrell
NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments on the challenging PASCAL VOC dataset, we find the inclusion of our discriminative, automatically detected configurations to outperform all existing state-of-the-art methods. |
| Researcher Affiliation | Academia | University of California, Berkeley; University of California, Davis |
| Pseudocode | No | The paper describes its algorithms in paragraph form, such as the greedy algorithm, but does not provide structured pseudocode or formally labeled algorithm blocks. A generic sketch of such a greedy selection appears below the table. |
| Open Source Code | No | The paper does not include any explicit statements about releasing source code or provide links to a code repository for the described methodology. |
| Open Datasets | Yes | In our experiments on the challenging PASCAL VOC dataset, we find the inclusion of our discriminative, automatically detected configurations to outperform all existing state-of-the-art methods. |
| Dataset Splits | No | The paper mentions using the PASCAL test set but does not specify a separate validation set or describe how data was split for validation purposes. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as CPU or GPU models, memory, or cloud instance types. |
| Software Dependencies | No | The paper mentions using 'fc7 features from the CNN model [6]' and a 'region based detection framework [13, 29]', but it does not specify version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | For discriminative patch discovery, we use K = \|P\|/2, θ = K/20. For correspondence detection, we discretize the 4D transform space of {x: relative horizontal shift, y: relative vertical shift, s: relative scale, p: relative aspect ratio} with step sizes ∆x = 30 px, ∆y = 30 px, ∆s = 1 px/px, ∆p = 1 px/px. A hedged sketch of this discretization follows the table. |
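
As noted in the Pseudocode row, the paper describes its greedy algorithm only in prose. The sketch below is a generic, minimal illustration of greedy selection by marginal coverage gain under a budget K, the standard recipe for coverage-style (submodular) objectives; it is not the paper's exact objective. The function name `greedy_select` and the input format (a mapping from candidate configurations to the sets of positive images they cover) are assumptions made for illustration.

```python
# Generic greedy selection by marginal coverage gain under a budget K.
# NOT the paper's exact objective; only a sketch of the standard greedy recipe
# for coverage-style objectives described in prose. `candidates` maps a
# candidate configuration to the set of positive images it covers
# (hypothetical input format).

def greedy_select(candidates, K):
    """Pick up to K candidates, each time adding the one that covers the most
    not-yet-covered positive images; stop early if no candidate adds coverage."""
    selected, covered = [], set()
    remaining = dict(candidates)
    for _ in range(K):
        best, best_gain = None, 0
        for cand, images in remaining.items():
            gain = len(images - covered)      # marginal coverage gain
            if gain > best_gain:
                best, best_gain = cand, gain
        if best is None:                      # nothing left that helps
            break
        selected.append(best)
        covered |= remaining.pop(best)
    return selected, covered

# Toy usage with hypothetical candidate configurations:
cands = {"cfg_a": {1, 2, 3}, "cfg_b": {3, 4}, "cfg_c": {5}}
print(greedy_select(cands, K=2))  # -> (['cfg_a', 'cfg_b'], {1, 2, 3, 4})
```

Ties are broken by iteration order here; the real selection criterion and stopping rule would follow the paper's objective, which this sketch does not reproduce.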
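
The Experiment Setup row quotes a discretization of the 4D transform space used for correspondence detection. The sketch below shows one plausible reading of that setup: the relative transform between two patch bounding boxes is quantized with the quoted step sizes, and under this reading two patch pairs would be grouped when they fall in the same cell. The box format `(x1, y1, x2, y2)`, the helper names, and the exact binning convention are assumptions, not the paper's interface.

```python
import math

# Step sizes for (x, y, s, p), mirroring the values quoted in the table.
DX, DY, DS, DP = 30.0, 30.0, 1.0, 1.0

def relative_transform(box_a, box_b):
    """Return (dx, dy, s, p): center shift, relative scale, relative aspect ratio.
    Boxes are (x1, y1, x2, y2); this layout is an assumption for illustration."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    aw, ah = ax2 - ax1, ay2 - ay1
    bw, bh = bx2 - bx1, by2 - by1
    dx = (bx1 + bx2) / 2.0 - (ax1 + ax2) / 2.0  # horizontal shift of centers
    dy = (by1 + by2) / 2.0 - (ay1 + ay2) / 2.0  # vertical shift of centers
    s = bw / aw                                 # relative scale
    p = (bw / bh) / (aw / ah)                   # relative aspect ratio
    return dx, dy, s, p

def transform_cell(box_a, box_b):
    """Quantize the 4D relative transform into an integer cell index."""
    dx, dy, s, p = relative_transform(box_a, box_b)
    return (math.floor(dx / DX), math.floor(dy / DY),
            math.floor(s / DS), math.floor(p / DP))

# Toy usage: two boxes roughly 140 px apart at the same scale and aspect ratio.
print(transform_cell((10, 10, 110, 110), (150, 20, 250, 120)))  # -> (4, 0, 1, 1)
```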