Introspective Classification with Convolutional Nets
Authors: Long Jin, Justin Lazarow, Zhuowen Tu
NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on benchmark datasets including MNIST, CIFAR-10, and SVHN using state-of-the-art CNN architectures, and observe improved classification results. |
| Researcher Affiliation | Academia | Long Jin UC San Diego longjin@ucsd.edu Justin Lazarow UC San Diego jlazarow@ucsd.edu Zhuowen Tu UC San Diego ztu@ucsd.edu |
| Pseudocode | Yes | Algorithm 1: Outline of the reclassification-by-synthesis algorithm for discriminative classifier training. (A hedged sketch of such a loop follows the table.) |
| Open Source Code | No | The paper does not provide access to source code for the method it describes. |
| Open Datasets | Yes | We conduct experiments on three standard benchmark datasets, including MNIST, CIFAR-10 and SVHN. |
| Dataset Splits | Yes | We use the standard MNIST [24] dataset, which consists of 55,000 training, 5,000 validation and 10,000 test samples. |
| Hardware Specification | No | The paper does not specify the hardware (GPU/CPU models, clock speeds, memory amounts, or other machine details) used to run its experiments. |
| Software Dependencies | No | The paper mentions an 'SGD optimizer' and the 'Adam optimizer [17]' but does not name software libraries or version numbers for these or other dependencies. |
| Experiment Setup | Yes | In our experiments, for the reclassification step, we use the SGD optimizer with mini-batch size of 64 (MNIST) or 128 (CIFAR-10 and SVHN) and momentum equal to 0.9; for the synthesis step, we use the Adam optimizer [17] with momentum term β1 equal to 0.5. (These hyperparameters are mapped to framework calls in the second sketch below.) |
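
To make the pseudocode row concrete, below is a minimal sketch of what a reclassification-by-synthesis loop can look like. It is an illustrative reconstruction built only from the algorithm's name and the optimizer settings quoted above, not the paper's actual Algorithm 1: the architecture, losses, labels, round counts, and learning rates are all assumptions, and PyTorch itself is an assumed framework.

```python
import random

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Stand-in CNN; the paper's actual architectures are not reproduced here.
classifier = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(32 * 16, 2),
)

# Reclassification step: SGD with momentum 0.9 (lr is a placeholder).
sgd = optim.SGD(classifier.parameters(), lr=0.01, momentum=0.9)

# Dummy positives standing in for MNIST mini-batches (28x28 grayscale).
real_loader = [(torch.randn(64, 1, 28, 28), None) for _ in range(4)]

pseudo_negs = []  # growing pool of synthesized pseudo-negatives

for outer_round in range(3):  # number of rounds is a placeholder
    # Synthesis step: gradient ascent on the classifier's "real" score,
    # starting from noise, using Adam with beta1 = 0.5 as quoted.
    samples = torch.randn(64, 1, 28, 28, requires_grad=True)
    adam = optim.Adam([samples], lr=0.02, betas=(0.5, 0.999))
    for _ in range(20):  # synthesis iterations are a placeholder
        adam.zero_grad()
        loss = -classifier(samples)[:, 1].mean()  # push samples toward "real"
        loss.backward()
        adam.step()
    pseudo_negs.append(samples.detach())

    # Reclassification step: retrain the classifier to separate real data
    # (label 1) from the accumulated pseudo-negatives (label 0).
    for real, _ in real_loader:
        neg = random.choice(pseudo_negs)
        x = torch.cat([real, neg])
        y = torch.cat([torch.ones(len(real)), torch.zeros(len(neg))]).long()
        sgd.zero_grad()
        F.cross_entropy(classifier(x), y).backward()
        sgd.step()
```

The alternation itself is the point of the sketch: each round's synthesized samples are kept in the pool, so the classifier is repeatedly retrained against its own earlier mistakes.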
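
The Experiment Setup quote pins down the optimizer hyperparameters exactly. In PyTorch (again an assumed framework; learning rates and Adam's β2 are placeholders not stated in the quote), they map to:

```python
import torch
import torch.nn as nn
import torch.optim as optim

MINI_BATCH = {"mnist": 64, "cifar10": 128, "svhn": 128}  # as quoted

model = nn.Linear(28 * 28, 2)  # stand-in for the paper's CNNs
samples = torch.randn(MINI_BATCH["mnist"], 28 * 28, requires_grad=True)

# Reclassification step: SGD with momentum 0.9; lr is a placeholder.
sgd = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Synthesis step: Adam with beta1 = 0.5; lr and beta2 are placeholders /
# PyTorch defaults, not taken from the paper.
adam = optim.Adam([samples], lr=0.02, betas=(0.5, 0.999))
```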