Noisy Dual Principal Component Pursuit
Authors: Tianyu Ding, Zhihui Zhu, Tianjiao Ding, Yunchen Yang, Rene Vidal, Manolis Tsakiris, Daniel Robinson
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This paper extends the global optimality and convergence theory of DPCP to the case of data corrupted by noise, and further demonstrates its robustness using synthetic and real data. |
| Researcher Affiliation | Academia | (1) Department of Applied Mathematics & Statistics, Johns Hopkins University, USA; (2) Mathematical Institute for Data Science, Johns Hopkins University, USA; (3) School of Information Science and Technology, ShanghaiTech University, China |
| Pseudocode | Yes | Algorithm 1 DPCP-PSGM for (3); a hedged sketch of this procedure is given after the table |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | We use the 3D point clouds from the KITTI dataset (Geiger et al., 2013). In addition to the 7 frames annotated in Zhu et al. (2018a), we further annotate 131 frames. Each point cloud contains around 10^5 points with approximately 50% outliers. |
| Dataset Splits | No | The paper mentions tuning parameters on a 'randomly selected training set of 13 frames' and using 'the rest of the frames for evaluation', but does not explicitly define a separate validation set for hyperparameter tuning. |
| Hardware Specification | Yes | Experiments done on a laptop with Intel i7-6700HQ @ 2.6GHz CPU, 16GB 2133MHz DDR4 Memory. |
| Software Dependencies | No | The paper mentions parameters for algorithms (e.g., 'The λ of ℓ2,1-RPCA is set to 1.92/√M', 'µ_min for DPCP-PSGM is set to 10^-9'), but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We tune the parameters of the algorithms on a randomly selected training set of 13 frames and use the rest of the frames for evaluation. Each method is first tuned to achieve its optimal error and then re-tuned to run as fast as possible while staying within 5% of that error. The λ of ℓ2,1-RPCA is set to 1.92/√M, the τ of DPCP-d is set to 2.76/√(N+M), µ_min for DPCP-PSGM is set to 10^-9, and the relative convergence accuracy, wherever applicable, is set to 10^-6. |
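
The Pseudocode and Experiment Setup rows above refer to DPCP-PSGM, a projected sub-gradient method for the DPCP problem min_{||b||_2 = 1} ||X^T b||_1. The sketch below is a minimal illustration of that scheme, not the paper's reference implementation: the spectral initialization, the step-size constants `mu0` and `beta`, and the simple backtracking-style step-size rule are assumptions standing in for the paper's step-size schedules; only the floor `mu_min = 1e-9` and the relative tolerance `1e-6` are taken from the quoted parameter settings.

```python
import numpy as np

def dpcp_psgm(X, mu0=1e-2, beta=0.5, mu_min=1e-9, tol=1e-6, max_iter=2000):
    """Minimal projected sub-gradient sketch for min_{||b||=1} ||X^T b||_1.

    X : (D, L) array whose columns are the (outlier-corrupted) data points.
    Returns a unit vector b estimating the normal of the inlier hyperplane.
    """
    # Spectral initialization (assumed): least significant left singular vector of X.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    b = U[:, -1]
    mu = mu0
    f_old = np.abs(X.T @ b).sum()
    for _ in range(max_iter):
        g = X @ np.sign(X.T @ b)            # sub-gradient of ||X^T b||_1
        b_new = b - mu * g
        b_new /= np.linalg.norm(b_new)      # project back onto the unit sphere
        f_new = np.abs(X.T @ b_new).sum()
        if f_new <= f_old:
            # Accept the step; stop once the relative decrease falls below tol.
            if f_old - f_new <= tol * max(f_old, 1e-12):
                return b_new
            b, f_old = b_new, f_new
        else:
            # Reject and geometrically shrink the step size, floored at mu_min.
            mu = max(beta * mu, mu_min)
    return b
```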
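
As a quick sanity check in the spirit of the synthetic experiments mentioned in the Research Type row (the paper's exact data model is not reproduced here), one can place noisy inliers on a known plane, add roughly 50% arbitrary outliers (the outlier level reported for the KITTI frames), run the `dpcp_psgm` sketch defined above, and measure the angle between the recovered and true normals. The noise level, point counts, and Gaussian outlier model below are illustrative assumptions.

```python
rng = np.random.default_rng(0)
D, n_in, n_out = 3, 500, 500                       # ~50% outliers, illustrative sizes
b_true = np.array([0.0, 0.0, 1.0])                 # ground-truth plane normal

plane_basis = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [0.0, 0.0]])               # spans the plane b_true^T x = 0
inliers = (plane_basis @ rng.standard_normal((2, n_in))
           + 0.01 * rng.standard_normal((D, n_in)))  # small additive noise
outliers = rng.standard_normal((D, n_out))           # arbitrary off-plane points

X = np.concatenate([inliers, outliers], axis=1)
b_hat = dpcp_psgm(X)
angle = np.degrees(np.arccos(min(1.0, abs(b_true @ b_hat))))
print(f"angle between estimated and true normal: {angle:.2f} deg")
```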