Angular Visual Hardness

Authors: Beidi Chen, Weiyang Liu, Zhiding Yu, Jan Kautz, Anshumali Shrivastava, Animesh Garg, Animashree Anandkumar

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We validate this score with an in-depth and extensive scientific study, and observe that CNN models with the highest accuracy also have the best AVH scores." (An AVH-score sketch follows the table.) |
| Researcher Affiliation | Collaboration | Rice University, Georgia Institute of Technology, NVIDIA, University of Toronto, Vector Institute (Toronto), Caltech |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating the release of open-source code for the described methodology. |
| Open Datasets | Yes | "We conduct experiments on the VisDA-17 (Peng et al., 2017) dataset, which is a widely used major benchmark for domain adaptation in image classification. The dataset contains a total number of 152,409 2D synthetic images from 12 categories in the source training set, and 55,400 real images from MS-COCO (Lin et al., 2014) with the same set of categories as the target domain validation set." |
| Dataset Splits | Yes | "We split all the validation images into 5 bins, [0.0, 0.2], [0.2, 0.4], [0.4, 0.6], [0.6, 0.8], [0.8, 1.0], based on their HSF respectively." (A binning sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used to run the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers required to reproduce the experiments. |
| Experiment Setup | Yes | "For consistency, we train all models for 90 epochs and decay the initial learning rate by a factor of 10 every 30 epochs. The initial learning rate for AlexNet and VGG-19 is 0.01 and for DenseNet-121 and ResNet-50 is 0.1." (A schedule sketch follows the table.) |
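
For readers who want to probe the headline claim, below is a minimal sketch of an AVH-style score, assuming the paper's definition (the angle between a sample's feature embedding and its ground-truth class weight, normalized by the sum of angles to all class weights). The tensor shapes and function name are illustrative, not taken from any released code.

```python
import torch
import torch.nn.functional as F

def avh_score(features: torch.Tensor, class_weights: torch.Tensor,
              labels: torch.Tensor) -> torch.Tensor:
    """AVH-style score: angle to the ground-truth class weight,
    normalized by the sum of angles to all class weights.

    features:      (N, D) penultimate-layer embeddings
    class_weights: (C, D) final-layer classifier weights
    labels:        (N,)   ground-truth class indices
    """
    # Cosine similarity between every sample and every class weight: (N, C)
    cos = F.normalize(features, dim=1) @ F.normalize(class_weights, dim=1).T
    # Angles in radians; clamp before arccos for numerical safety
    angles = torch.arccos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    # Angle to the true class, normalized by the sum over all classes
    true_angle = angles.gather(1, labels.unsqueeze(1)).squeeze(1)
    return true_angle / angles.sum(dim=1)
```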
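The "Dataset Splits" evidence bins validation images into five equal-width intervals of Human Selection Frequency (HSF). A minimal sketch of that binning, assuming per-image HSF values are available as a float array in [0, 1] (the `hsf` array here is random placeholder data):

```python
import numpy as np

# Hypothetical per-image HSF values in [0, 1]
hsf = np.random.rand(1000)

# Five equal-width bins as in the paper: [0.0, 0.2], (0.2, 0.4], ...
edges = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
bin_ids = np.clip(np.digitize(hsf, edges, right=True) - 1, 0, 4)

for b in range(5):
    count = int((bin_ids == b).sum())
    print(f"bin [{edges[b]:.1f}, {edges[b + 1]:.1f}]: {count} images")
```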
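The training recipe quoted under "Experiment Setup" (90 epochs, initial learning rate decayed by a factor of 10 every 30 epochs) maps directly onto a step schedule. A minimal PyTorch sketch for the ResNet-50 case; the choice of SGD and the momentum value are common ImageNet defaults assumed here, not stated in the quoted evidence:

```python
import torch
from torchvision import models

model = models.resnet50()
# Initial LR 0.1 for ResNet-50 per the paper; momentum is an assumed default
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# Decay the learning rate by a factor of 10 every 30 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):
    # ... one training epoch over the dataset would go here ...
    scheduler.step()
```

For AlexNet and VGG-19, the paper's quoted setup changes only the initial learning rate (0.01 instead of 0.1); the schedule is otherwise identical.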