On Fenchel Mini-Max Learning
Authors: Chenyang Tao, Liqun Chen, Shuyang Dai, Junya Chen, Ke Bai, Dong Wang, Jianfeng Feng, Wenlian Lu, Georgiy Bobashev, Lawrence Carin
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To demonstrate the utility of the proposed approach, we consider learning unnormalized statistical models, nonparametric density estimation and training generative models, with encouraging empirical results presented. [Section 5, Experiments] To validate the proposed FML framework and benchmark it against state-of-the-art methods, we consider a wide range of experiments, using synthetic and real-world datasets. |
| Researcher Affiliation | Collaboration | Chenyang Tao (1), Liqun Chen (1), Shuyang Dai (1), Junya Chen (1,2), Ke Bai (1), Dong Wang (1), Jianfeng Feng (3), Wenlian Lu (2), Georgiy Bobashev (4), Lawrence Carin (1); (1) Electrical & Computer Engineering, Duke University, Durham, NC, USA; (2) School of Mathematical Sciences, Fudan University, Shanghai, China; (3) ISTBI, Fudan University, Shanghai, China; (4) RTI International, Research Triangle Park, NC, USA |
| Pseudocode | Yes | Algorithm 1 Fenchel Mini-Max Learning |
| Open Source Code | Yes | our code is from https://www.github.com/chenyang-tao/FML. |
| Open Datasets | Yes | In addition to the toy examples, we also evaluate the proposed FML on real datasets from the UCI data repository [17]. [Image datasets] We applied FML-training to a number of popular image datasets including MNIST, CelebA, and Cifar10. [Natural language models] We further apply FML to the learning of natural language models. The following two benchmark datasets are considered: (i) EMNLP WMT news [26] and (ii) MS COCO [43]. |
| Dataset Splits | Yes | To evaluate model performance, we randomly split the data into ten folds, and use seven of them for training and three of them for evaluation. |
| Hardware Specification | Yes | All experiments are implemented with Tensorflow and executed on a single NVIDIA TITAN X GPU. |
| Software Dependencies | No | The paper mentions 'Tensorflow' as the software used but does not specify a version number. 'All experiments are implemented with Tensorflow and executed on a single NVIDIA TITAN X GPU.' |
| Experiment Setup | No | Details of the experimental setup are provided in the SUPP, due to space limits, and our code is from https://www.github.com/chenyang-tao/FML. |
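
For readers reimplementing the evaluation protocol quoted in the Dataset Splits row above (ten random folds, seven used for training and three for evaluation), the following is a minimal sketch assuming NumPy; the helper name `ten_fold_split` and the fixed random seed are illustrative and not taken from the paper's released code.

```python
# Minimal sketch (not from the paper's code) of the quoted split protocol:
# shuffle the data, partition into ten folds, and hold out three folds.
import numpy as np

def ten_fold_split(n_samples, n_train_folds=7, n_folds=10, seed=0):
    """Return (train_idx, eval_idx) index arrays for one random split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffled sample indices
    folds = np.array_split(idx, n_folds)      # ten roughly equal folds
    train_idx = np.concatenate(folds[:n_train_folds])
    eval_idx = np.concatenate(folds[n_train_folds:])
    return train_idx, eval_idx

# Example usage with a hypothetical UCI-style feature matrix X:
# train_idx, eval_idx = ten_fold_split(len(X))
# X_train, X_eval = X[train_idx], X[eval_idx]
```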