Learning Bounds for Risk-sensitive Learning

Authors: Jaeho Lee, Sejun Park, Jinwoo Shin

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we demonstrate the practical implications of the proposed bounds via exploratory experiments on neural networks."
Researcher Affiliation | Academia | Jaeho Lee, Sejun Park, Jinwoo Shin; Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering and Graduate School of AI
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; methods are described in prose and mathematical formulations.
Open Source Code | Yes | "Code. Available at https://github.com/jaeho-lee/oce."
Open Datasets | Yes | "In our experiments on CIFAR-10 [29] with ResNet18 [24], we find that batch-based SVP indeed outperforms batch-based CVaR minimization (see Section 4)." (A hedged sketch of batch-based CVaR minimization follows the table.)
Dataset Splits | No | The paper mentions "test and train CVaR" but does not describe a validation set or report split percentages or counts.
Hardware Specification | No | The paper does not report the hardware (e.g., GPU model, CPU type, memory) used to run the experiments.
Software Dependencies | No | The paper mentions the "PyTorch default learning rate" but does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | "As a model, we use ResNet18 [24]. As an optimizer, we use Adam with weight decay [34] with a batch size 100 and PyTorch default learning rate. For CVaR, we have experimented with α = {0.2, 0.4, 0.6, 0.8}. For batch-SVP, we have simply tested over λ = {0.5, 1.0}." (A hedged sketch of this setup follows the table.)
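
For concreteness, here is a minimal sketch of what batch-based CVaR minimization at level α typically looks like: each step keeps only the worst ⌈αB⌉ per-sample losses in a batch of size B and averages them. This illustrates the technique named in the Open Datasets excerpt, not the authors' code; the function name and the choice of cross-entropy loss are assumptions.

```python
# Hedged sketch of batch-based CVaR minimization (not the authors' code).
# CVaR at level alpha averages the worst alpha-fraction of per-sample losses.
import math
import torch
import torch.nn.functional as F

def batch_cvar_loss(logits, targets, alpha=0.2):
    """Average of the worst ceil(alpha * B) per-sample losses in the batch."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")  # shape (B,)
    k = max(1, math.ceil(alpha * per_sample.numel()))
    worst_k, _ = torch.topk(per_sample, k)  # the k largest losses
    return worst_k.mean()
```

With α = 0.2 and the batch size of 100 from the Experiment Setup row, each gradient step would be driven by the 20 hardest examples in the batch.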
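
The Experiment Setup excerpt can likewise be made concrete. The sketch below assumes that "Adam with weight decay [34]" refers to AdamW, that the "PyTorch default learning rate" for Adam is 1e-3, and that batch-SVP denotes sample-variance penalization, i.e., the batch mean loss plus λ times the batch standard deviation of the per-sample losses; the weight-decay value shown is PyTorch's AdamW default, not a figure from the paper.

```python
# Hedged sketch of the reported setup (assumptions noted in the comments).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def batch_svp_loss(logits, targets, lam=0.5):
    """Batch mean loss + lam * batch std of per-sample losses (assumed SVP form)."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return per_sample.mean() + lam * per_sample.std()

model = resnet18(num_classes=10)  # ResNet18 on CIFAR-10 (10 classes)
# "Adam with weight decay" read as AdamW; lr=1e-3 is PyTorch's Adam default,
# weight_decay=1e-2 is PyTorch's AdamW default (the paper gives no value).
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

# One training step on a CIFAR-10 batch (x, y) with batch size 100:
# loss = batch_svp_loss(model(x), y, lam=0.5)   # lam in {0.5, 1.0} per the paper
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Note that CIFAR-specific ResNet variants often replace the first 7x7 convolution with a 3x3 one; the stock torchvision model is used here purely for illustration.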