Classification with Rejection: Scaling Generative Classifiers with Supervised Deep Infomax
Authors: Xin Wang, Siu Ming Yiu
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that SDIM with rejection policy can effectively reject illegal inputs, including adversarial examples and out-of-distribution samples. |
| Researcher Affiliation | Academia | Xin Wang and Siu Ming Yiu, The University of Hong Kong, {xwang, smyiu}@cs.hku.hk |
| Pseudocode | No | The paper describes the framework and methods in text and diagrams (Figure 1), but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a link to 'open sourced code' for a baseline model (ABS) in a footnote, but does not provide concrete access to the source code for the methodology (SDIM) described in this paper. |
| Open Datasets | Yes | We evaluate the effectiveness of the rejection policy of SDIM on four image datasets: MNIST, Fashion-MNIST (both resized to 32×32 from 28×28), CIFAR10, and SVHN. |
| Dataset Splits | No | The paper mentions 'training set' and 'test sets' but does not explicitly provide details about training/validation/test dataset splits (percentages, counts, or specific split methodology). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | Throughout our experiments, we use α = β = γ = 1 in the loss function. We set d = 64 in all our experiments. For the encoder of SDIM, we use ResNet [He et al., 2016] on 32×32 inputs with a stack of 8n + 2 layers and four filter sizes {32, 64, 128, 256}. The architecture is summarized as: output map sizes 32×32, 16×16, 8×8, 4×4; # layers 1 + 2n, 2n, 2n, 2n; # filters 32, 64, 128, 256. |
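The stage layout in the experiment-setup row implies a total depth of 8n + 2 layers. A minimal sketch (plain Python; the attribution of the extra layer to a final output layer is an assumption, since the paper's stage counts sum to 8n + 1) tabulating the stages and checking the depth formula:

```python
def resnet_stages(n):
    """Per-stage (output map size, # layers, # filters) for the SDIM
    ResNet encoder as summarized in the paper's setup table."""
    return [
        (32, 1 + 2 * n, 32),   # initial conv + first residual stage
        (16, 2 * n, 64),
        (8,  2 * n, 128),
        (4,  2 * n, 256),
    ]

def total_layers(n):
    # Stage layers sum to 8n + 1; the remaining layer is assumed to be
    # a final output layer, giving the 8n + 2 depth stated in the paper.
    return sum(layers for _, layers, _ in resnet_stages(n)) + 1

print(total_layers(2))  # 18, i.e. 8*2 + 2
```

For example, n = 2 gives an 18-layer encoder and n = 4 a 34-layer one, matching the 8n + 2 formula.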