Learning with Stochastic Orders
Authors: Carles Domingo-Enrich, Yair Schiff, Youssef Mroueh
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide a min-max framework for learning with stochastic orders and validate it experimentally on synthetic and high-dimensional image generation, with promising results. |
| Researcher Affiliation | Collaboration | Carles Domingo-Enrich New York University cd2754@nyu.edu Yair Schiff Cornell University yzs2@cornell.edu Youssef Mroueh IBM Research AI mroueh@us.ibm.com |
| Pseudocode | Yes | Algorithm 1 given in App. F summarizes learning with the surrogate VDC. Algorithm 2 given in App. F summarizes learning with the surrogate CT distance. |
| Open Source Code | Yes | Code to reproduce experimental results is available here. |
| Open Datasets | Yes | In our experiments, we use the following publicly available data: (1) the CIFAR-10 (Krizhevsky & Hinton, 2009) dataset, released under the MIT license, and (2) the GitHub icon silhouette, which was copied from https://github.com/CW-Huang/CP-Flow/blob/main/imgs/github.png. |
| Dataset Splits | Yes | We use the CIFAR-10 training data and split it as 95% training and 5% validation. |
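The 95%/5% split above can be sketched with random index partitioning. This is a minimal illustration, not the authors' code: the fixed seed and index-based mechanism are assumptions, since the paper only states the split percentages.

```python
import random

# CIFAR-10 ships with 50,000 training images; the paper holds out 5% of
# them for validation. The seed below is illustrative, not from the paper.
N_TRAIN_TOTAL = 50_000
VAL_FRACTION = 0.05

rng = random.Random(0)                      # assumed fixed seed for reproducibility
indices = list(range(N_TRAIN_TOTAL))
rng.shuffle(indices)

n_val = int(N_TRAIN_TOTAL * VAL_FRACTION)   # 2,500 validation images
val_idx = indices[:n_val]
train_idx = indices[n_val:]                 # 47,500 training images
```

In a PyTorch pipeline these index lists would typically be passed to `torch.utils.data.Subset` over the CIFAR-10 training set.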
| Hardware Specification | Yes | Training the baseline g0 and g with the surrogate VDC was done on a compute environment with 1 CPU and 1 A100 GPU. |
| Software Dependencies | No | Our experiments rely on various open-source libraries, including pytorch (Paszke et al., 2019) (license: BSD) and pytorch-lightning (Falcon et al., 2019) (Apache 2.0). The paper mentions software names but does not provide specific version numbers for them. |
| Experiment Setup | Yes | We set λ in Equation (11) to 10... We use ADAM optimizers (Kingma & Ba, 2015) for both networks, learning rates of 1e-4, and a batch size of 64. |
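To make the reported hyperparameters concrete, here is a minimal stdlib sketch of a single Adam update (Kingma & Ba, 2015) at the stated learning rate 1e-4, alongside the other reported settings. The function and state names are illustrative; the paper itself uses PyTorch optimizers rather than a hand-rolled update.

```python
# Reported settings: lr = 1e-4 for both networks, batch size 64, lambda = 10.
LR = 1e-4
BATCH_SIZE = 64
LAMBDA = 10.0

def adam_step(param, grad, state, lr=LR, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter (standard Adam, default betas)."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad          # 1st moment
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad * grad   # 2nd moment
    m_hat = state["m"] / (1 - beta1 ** state["t"])                # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return param - lr * m_hat / (v_hat ** 0.5 + eps)

state = {"t": 0, "m": 0.0, "v": 0.0}
p = adam_step(1.0, grad=0.5, state=state)
# After bias correction, the first step moves the parameter by roughly lr
# in the negative gradient direction, regardless of the gradient's scale.
```

In practice this corresponds to `torch.optim.Adam(net.parameters(), lr=1e-4)` for each of the two networks in the min-max objective.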