Lifted Proximal Operator Machines
Authors: Jia Li, Cong Fang, Zhouchen Lin
AAAI 2019, pp. 4181-4188
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on MNIST and CIFAR-10 datasets testify to the advantages of LPOM. We implement LPOM on fully connected DNNs and test it on benchmark datasets, MNIST and CIFAR-10, and obtain satisfactory results. |
| Researcher Affiliation | Academia | Jia Li, Cong Fang, Zhouchen Lin Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, P. R. China jiali.gm@gmail.com; fangcong@pku.edu.cn; zlin@pku.edu.cn |
| Pseudocode | Yes | Algorithm 1 Solving LPOM; Algorithm 2 Solving (32). |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-sourcing the code for LPOM. |
| Open Datasets | Yes | Experiments on MNIST and CIFAR-10 datasets testify to the advantages of LPOM. For the MNIST dataset, we use 28 × 28 = 784 raw pixels as the inputs. It includes 60,000 training images and 10,000 test images. http://yann.lecun.com/exdb/mnist/ |
| Dataset Splits | No | For the MNIST dataset, we use 28 × 28 = 784 raw pixels as the inputs. It includes 60,000 training images and 10,000 test images. The paper mentions training and test sets, but no explicit validation split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | We implement LPOM with MATLAB without optimizing the code. We use the SGD-based solver in Caffe (Jia et al. 2014). No version numbers are provided for MATLAB or Caffe. |
| Experiment Setup | Yes | We run LPOM and SGD for 100 epochs with a fixed batch size 100. For LPOM, we simply set µi = 20 in (18). For LPOM, we set µi = 100 in (18). For LPOM, we set µi = 20 for all the networks. |