L2-Nonexpansive Neural Networks
Authors: Haifeng Qian, Mark N. Wegman
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments are divided into three groups to study different properties of L2NNNs. |
| Researcher Affiliation | Industry | Haifeng Qian & Mark N. Wegman, IBM Research, Yorktown Heights, NY 10598, USA ({qianhaifeng,wegman}@us.ibm.com) |
| Pseudocode | No | Not found. The paper describes its methods in prose and mathematical formulas, but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our MNIST and CIFAR-10 classifiers are available at http://researcher.watson.ibm.com/group/9298 |
| Open Datasets | Yes | The experiments use the publicly available MNIST and CIFAR-10 datasets. |
| Dataset Splits | Yes | In early-stopping runs, 5000 training images are withheld as a validation set and training stops when the loss on the validation set stops decreasing (see the split/early-stopping sketch after this table). |
| Hardware Specification | No | Not found. The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | Not found. The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | We use a loss function with three terms, with trade-off hyperparameters γ and ω: $L = L_a + \gamma L_b + \omega L_c$ (Eq. 3). (See the loss-combination sketch after this table.) |
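The dataset-splits row reports that 5000 training images are withheld as a validation set and that training stops once the validation loss stops decreasing. The following is a minimal sketch of that protocol, not the authors' code: the model, optimizer, batch sizes, and patience value are assumptions added for illustration.

```python
# Minimal sketch (not the authors' code) of the reported protocol:
# withhold 5,000 MNIST training images as a validation set and stop
# training when the validation loss stops decreasing.
# The model, optimizer, batch sizes, and patience are assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

train_full = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
train_set, val_set = random_split(train_full, [55000, 5000])  # 5,000 held out
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
val_loader = DataLoader(val_set, batch_size=1000)

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))  # placeholder model
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(200):
    model.train()
    for x, y in train_loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    # Early stopping: track validation loss, stop when it stops decreasing.
    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader)
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```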
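The experiment-setup row quotes the paper's three-term loss $L = L_a + \gamma L_b + \omega L_c$. Below is a minimal sketch of how such a weighted combination could be assembled; the individual term definitions and the γ/ω values shown are assumptions, not the paper's formulas.

```python
# Minimal sketch (assumptions, not the paper's definitions) of combining
# three loss terms with trade-off hyperparameters gamma and omega:
#     L = L_a + gamma * L_b + omega * L_c        (Eq. 3 in the paper)
import torch
import torch.nn.functional as F

def loss_a(logits, labels):
    # Placeholder for the paper's first term (e.g. a classification loss).
    return F.cross_entropy(logits, labels)

def loss_b(logits, labels):
    # Placeholder for the paper's second term; see the paper for its definition.
    return torch.zeros(())

def loss_c(logits, labels):
    # Placeholder for the paper's third term; see the paper for its definition.
    return torch.zeros(())

def total_loss(logits, labels, gamma=0.1, omega=0.1):
    # gamma and omega are trade-off hyperparameters; the values here are arbitrary.
    return (loss_a(logits, labels)
            + gamma * loss_b(logits, labels)
            + omega * loss_c(logits, labels))
```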