Robust Model Reasoning and Fitting via Dual Sparsity Pursuit
Authors: Xingyu Jiang, Jiayi Ma
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments regarding known and unknown model fitting on synthetic and challenging real datasets have demonstrated the superiority of our method against the state-of-the-art. |
| Researcher Affiliation | Academia | 1Huazhong University of Science and Technology, 2Wuhan University |
| Pseudocode | Yes | We conclude the pseudo code of the implementation of our DSP method in Alg. 1 and Alg. 2. |
| Open Source Code | Yes | Code is released at: https://github.com/StaRainJ/DSP. |
| Open Datasets | Yes | 8 public datasets [8, 2] are used, and we divide them into two groups including Fund: kusvod2, CPC, TUM, KITTI, T&T; and Homo: homogr, EVD, Hpatch. |
| Dataset Splits | No | The paper describes the synthetic data generation parameters and mentions the use of public datasets for testing, but it does not specify explicit training/validation/test splits for their method. |
| Hardware Specification | Yes | The experiments of RANSAC [19], EAS [18] and our DSP are conducted on a desktop with 4.0 GHz Intel Core i7-6700K CPU and 16GB memory. ... And two deep learning methods are accelerated by NVIDIA TITAN V GPUs. |
| Software Dependencies | No | The paper mentions 'MATLAB code' and 'Ubuntu 16.04' but does not specify version numbers for other ancillary software libraries or solvers used in their method. |
| Experiment Setup | Yes | In DSP, λ and γ are two hyper-parameters. Based on [44], we set λ = 0.005·log(4N)·[1, 1, 0.5, 1, 1, 0.5, 0.5, 0.5, 0.1] as default... In addition, we set γ = 0.06 at the beginning, then update it with 0.98γ for each twenty iterations, and constrain γ_min = 0.02. Moreover, we set the max iteration as 2k, and stop it if ε = ‖x_k − x_{k−1}‖_2 ≤ 1e−6. As for τ, it controls the number of estimated basis, i.e., r. We set ξ = L(M, x_i, e_i)/L(M, x_{i−1}, e_{i−1}), and describe its distribution on all real data as in Fig. 3. Based on the best ξ, we set τ = 1.2·L(M, x_{i−1}, e_{i−1}) during the estimation of x_i. |
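The quoted experiment setup describes an annealing schedule (γ starts at 0.06, is multiplied by 0.98 every twenty iterations, and is floored at γ_min = 0.02) together with a stopping rule (at most 2k iterations, or ‖x_k − x_{k−1}‖_2 ≤ 1e−6). A minimal sketch of that outer loop, assuming a placeholder `update_x` callable standing in for one DSP solver step (not the authors' released code):

```python
# Illustrative sketch of the gamma schedule and stopping rule quoted above.
# `update_x` is a hypothetical stand-in for one DSP optimization step.
import numpy as np

def run_schedule(x0, update_x, max_iter=2000, eps=1e-6):
    gamma, gamma_min = 0.06, 0.02       # initial gamma and its lower bound
    x = x0
    for k in range(1, max_iter + 1):
        x_new = update_x(x, gamma)      # one solver iteration (placeholder)
        if np.linalg.norm(x_new - x) <= eps:  # ||x_k - x_{k-1}||_2 <= 1e-6
            return x_new, k
        x = x_new
        if k % 20 == 0:                 # decay gamma every twenty iterations
            gamma = max(0.98 * gamma, gamma_min)
    return x, max_iter
```

With a contracting update such as `lambda x, g: 0.5 * x`, the loop terminates well inside the 2k-iteration budget once successive iterates differ by less than 1e−6.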