Robustness to Adversarial Perturbations in Learning from Incomplete Data

Authors: Amir Najafi, Shin-ichi Maeda, Masanori Koyama, Takeru Miyato

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We have also tested our method, denoted by SSDRL, via extensive computer experiments on datasets such as MNIST [16], SVHN [17], and CIFAR-10 [18]. When equipped with deep neural networks, SSDRL outperforms rivals such as Pseudo-Labeling (PL) [19] and the supervised DRL of [9] on all the datasets."
Researcher Affiliation | Collaboration | Amir Najafi (Department of Computer Engineering, Sharif University of Technology, Tehran, Iran; najafy@ce.sharif.edu); Shin-ichi Maeda (Preferred Networks, Inc., Tokyo, Japan; ichi@preferred.jp); Masanori Koyama (Preferred Networks, Inc., Tokyo, Japan; masomatics@preferred.jp); Takeru Miyato (Preferred Networks, Inc., Tokyo, Japan; miyato@preferred.jp)
Pseudocode | Yes | Algorithm 1: "Stochastic Gradient Descent for SSDRL" (a hedged sketch of one such update step follows the table)
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology.
Open Datasets | Yes | MNIST [16], SVHN [17], and CIFAR-10 [18]
Dataset Splits | No | The paper does not provide the dataset-split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not specify the hardware (exact GPU/CPU models, processor speeds, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not name the ancillary software (e.g., libraries or solvers with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiments.
Experiment Setup | No | The paper states that "Architecture and other specifications about our DNNs are explained in details in Appendix A.", but the provided text does not contain concrete experimental-setup details such as hyperparameter values, training configurations, or system-level settings.
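
Since the paper's Algorithm 1 (Stochastic Gradient Descent for SSDRL) is available only as pseudocode, below is a minimal PyTorch-style sketch of what one semi-supervised, distributionally robust SGD step could look like. This is not the authors' implementation: the projected-gradient inner maximization, the hard pseudo-labels on unlabeled data, and every name and hyperparameter (perturb, ssdrl_step, epsilon, steps, step_size, gamma) are illustrative assumptions standing in for the paper's actual objective.

```python
# Sketch only: NOT the authors' Algorithm 1. Hard pseudo-labels stand in for
# SSDRL's distributionally robust treatment of unlabeled examples, and all
# hyperparameters here are assumed values.
import torch
import torch.nn.functional as F


def perturb(model, x, y, epsilon=0.1, steps=5, step_size=0.02):
    """Approximate the inner maximization: an L2-bounded perturbation of x
    that increases the cross-entropy loss, via projected gradient ascent."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad).detach()
        # Project back onto the L2 ball of radius epsilon.
        norms = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
        scale = (epsilon / norms).clamp(max=1.0)
        delta = (delta * scale.view(-1, *([1] * (x.dim() - 1)))).requires_grad_(True)
    return delta.detach()


def ssdrl_step(model, optimizer, x_lab, y_lab, x_unlab, gamma=1.0):
    """One semi-supervised robust training step (sketch): adversarial
    cross-entropy on the labeled batch plus a pseudo-labeled adversarial
    term on the unlabeled batch, weighted by gamma."""
    optimizer.zero_grad()

    # Labeled part: loss at an adversarially perturbed input.
    delta_lab = perturb(model, x_lab, y_lab)
    loss_lab = F.cross_entropy(model(x_lab + delta_lab), y_lab)

    # Unlabeled part: hard pseudo-labels from the current model are an
    # assumption here; the paper instead uses a soft, distributionally
    # robust assignment over the unlabeled data.
    with torch.no_grad():
        pseudo = model(x_unlab).argmax(dim=1)
    delta_unlab = perturb(model, x_unlab, pseudo)
    loss_unlab = F.cross_entropy(model(x_unlab + delta_unlab), pseudo)

    loss = loss_lab + gamma * loss_unlab
    loss.backward()
    optimizer.step()
    return float(loss)
```

The gamma weight mirrors the usual trade-off between supervised and unsupervised terms in semi-supervised objectives; reproducing the paper's reported comparisons against Pseudo-Labeling and supervised DRL would additionally require the architecture and hyperparameter details deferred to its Appendix A.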