Identifying Model Weakness with Adversarial Examiner
Authors: Michelle Shu, Chenxi Liu, Weichao Qiu, Alan Yuille
AAAI 2020, pp. 11998-12006 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on ShapeNet object classification. We show that our adversarial examiner can successfully put more emphasis on the weakness of the model, preventing performance estimates from being overly optimistic. |
| Researcher Affiliation | Academia | Michelle Shu, Chenxi Liu, Weichao Qiu, Alan Yuille, Johns Hopkins University; {mshu1, cxliu}@jhu.edu, {qiuwch, alan.l.yuille}@gmail.com |
| Pseudocode | Yes | Algorithm 1: Adversarial Examiner Procedure (a minimal sketch of this loop appears after the table). |
| Open Source Code | No | The paper does not provide a link or explicit statement about the availability of open-source code for the methodology described in this paper. |
| Open Datasets | Yes | We conduct experiments on visual recognition of objects in the ShapeNet dataset (Chang et al. 2015), which contains 55 classes and 51,190 instances. |
| Dataset Splits | No | The paper mentions 'For each class, we choose one 3D object in the validation set that has the highest post-softmax probability on the true class.' but does not provide specific split percentages, sample counts, or clear predefined split information for reproducibility. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were provided in the paper. |
| Software Dependencies | No | The paper mentions 'Blender software for rendering', 'ResNet34', 'AlexNet', 'Adam optimizer', and a 'Bayesian Optimization package' with a link, but does not provide specific version numbers for these or other key software components, nor does it specify the ML framework used. |
| Experiment Setup | Yes | The ResNet34 model is trained with learning rate of 0.005, and AlexNet model with 0.001, both with Adam optimizer (Kingma and Ba 2014) for 40 epochs. ... We set the learning rate to 0.001 and batch size to 32, and use Adam optimizer (Kingma and Ba 2014) to update model parameters. (A hedged training-setup sketch appears after the table.) |
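
To make the pseudocode row concrete, here is a minimal Python sketch of the adversarial-examiner loop described by Algorithm 1: sequentially propose examination policies (e.g., camera pose for rendering a ShapeNet instance) and keep the one that most reduces the model's confidence on the true class. The `score_fn` stub and the use of random search in place of the Bayesian Optimization the paper reports are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def adversarial_examiner(score_fn, bounds, n_iters=50, rng=None):
    """Search the examination space for the policy that minimizes the
    model's post-softmax probability of the true class.

    score_fn : hypothetical stand-in for "render the object under policy z,
               classify it, return the true-class probability".
    bounds   : (low, high) arrays delimiting the policy space.
    Random search here stands in for the paper's Bayesian Optimization.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    low, high = map(np.asarray, bounds)
    best_z, best_score, history = None, float("inf"), []
    for _ in range(n_iters):
        z = rng.uniform(low, high)      # propose an examination policy
        s = score_fn(z)                 # model confidence at this policy
        history.append((z, s))
        if s < best_score:              # keep the strongest weakness found
            best_z, best_score = z, s
    return best_z, best_score, history

# Toy usage: a fake score function with a "blind spot" near z = (1.0, -0.5).
if __name__ == "__main__":
    weak_spot = np.array([1.0, -0.5])
    fake_score = lambda z: 1.0 - np.exp(-np.sum((z - weak_spot) ** 2))
    z_star, s_star, _ = adversarial_examiner(
        fake_score, bounds=([-3.14, -1.57], [3.14, 1.57]), n_iters=200)
    print(f"weakest policy ~ {z_star}, confidence {s_star:.3f}")
```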
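The experiment-setup row quotes learning rates, the optimizer, and the epoch count, but (as the software-dependencies row notes) the paper never names its ML framework. The sketch below maps those reported hyperparameters onto PyTorch/torchvision as an assumed framework; the data loader and `num_classes=55` wiring are hypothetical placeholders, not the authors' code.

```python
import torch
import torchvision

def make_model(arch="resnet34", num_classes=55):
    # Learning rates follow the values quoted in the experiment-setup row.
    if arch == "resnet34":
        model = torchvision.models.resnet34(num_classes=num_classes)
        lr = 0.005   # reported for ResNet34
    else:
        model = torchvision.models.alexnet(num_classes=num_classes)
        lr = 0.001   # reported for AlexNet
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    return model, optimizer

def train(model, optimizer, loader, epochs=40, device="cpu"):
    # 40 epochs with Adam, per the quoted setup; `loader` is a hypothetical
    # DataLoader over rendered ShapeNet images and class labels.
    criterion = torch.nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```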