Finding Differences Between Transformers and ConvNets Using Counterfactual Simulation Testing

Authors: Nataniel Ruiz, Sarah Bargal, Cihang Xie, Kate Saenko, Stan Sclaroff

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental In this work, we present Counterfactual Simulation Testing, a counterfactual framework that allows us to study the robustness of neural networks with respect to some of these naturalistic variations by building realistic synthetic scenes that allow us to ask counterfactual questions to the models, ultimately providing answers to questions such as 'Would your classification still be correct if the object were viewed from the top?' or 'Would your classification still be correct if the object were partially occluded by another object?'. Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers with respect to these naturalistic variations. We find evidence that ConvNeXt is more robust to pose and scale variations than Swin, that ConvNeXt generalizes better to our simulated domain and that Swin handles partial occlusion better than ConvNeXt. We also find that robustness for all networks improves with network scale and with data scale and variety. We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of everyday objects with naturalistic variations such as object pose, scale, viewpoint, lighting and occlusions.
Researcher Affiliation Collaboration Nataniel Ruiz (Boston University, nruiz9@bu.edu); Sarah Adel Bargal (Georgetown University, sarah.bargal@georgetown.edu); Cihang Xie (University of California Santa Cruz, cixie@ucsc.edu); Kate Saenko (Boston University and MIT-IBM Watson AI Lab, saenko@bu.edu); Stan Sclaroff (Boston University, sclaroff@bu.edu)
Pseudocode No No pseudocode or algorithm blocks were found within the paper.
Open Source Code Yes Project page: https://counterfactualsimulation.github.io. Additionally, the Ethics Checklist item 3a states: 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]'
Open Datasets Yes We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of 92 object models with 27 HDRI skybox lighting environments in a kitchen scene, covering naturalistic variations such as object pose, scale, viewpoint, lighting and occlusions. The dataset comprises 5 subsets of naturalistic scene variations: object pose, object scale, 360° panoramic camera rotation, top-to-frontal object view and occlusion with different objects. We release this dataset to the public for use in benchmarking and architecture comparison. Project page: https://counterfactualsimulation.github.io.
Dataset Splits No The paper does not provide train/validation/test splits for the NVD dataset, which is used for evaluation. It mentions using the 'ImageNet-1k validation set' for the pre-trained models' initial accuracies, but this is an external dataset's validation split, not a split of their own experimental data for reproduction. The NVD is used in its entirety for testing.
Hardware Specification Yes We use two GeForce RTX 2080 GPUs to perform all experiments.
Software Dependencies No The paper mentions using the 'MIT ThreeDWorld (TDW) [19] platform' and the 'official open-sourced code for both models' (ConvNeXt and Swin), but specific version numbers for these software components are not provided.
Experiment Setup No The paper states: 'All PCCP metrics in this section are computed using top-5 predictions for greater stability of results. We plot standard deviation error bars for the counterfactual study figures using bootstrap resampling (100 resamples).' While these are experimental details, they do not include specific hyperparameters (e.g., learning rate, batch size) or training configuration details, as the models used were pre-trained.
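The error-bar procedure quoted above (standard deviation via bootstrap resampling with 100 resamples) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `bootstrap_std`, the per-image correctness array, and the sample data are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_std(correct, n_resamples=100):
    """Standard deviation of the mean accuracy estimated by
    resampling the per-image correctness indicators with replacement."""
    correct = np.asarray(correct, dtype=float)
    means = [rng.choice(correct, size=correct.size, replace=True).mean()
             for _ in range(n_resamples)]
    return float(np.std(means))

# Hypothetical per-image top-5 correctness indicators (1 = correct).
correct = rng.integers(0, 2, size=500)
err = bootstrap_std(correct, n_resamples=100)
```

With a few hundred images the resulting standard deviation is small, which is consistent with the tight error bars one would expect when metrics are computed over a full dataset subset.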