Efficient Algorithms for General Isotone Optimization
Authors: Xiwen Wang, Jiaxi Ying, José Vinícius de M. Cardoso, Daniel P. Palomar | pp. 8575-8583
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our algorithm and state-of-the-art methods with experiments involving both synthetic and real-world data. The experimental results demonstrate that our algorithm is more efficient by one to four orders of magnitude than the state-of-the-art methods. |
| Researcher Affiliation | Academia | Xiwen Wang, Jiaxi Ying, José Vinícius de M. Cardoso, Daniel P. Palomar; The Hong Kong University of Science and Technology; {xwangew, jx.ying, jvdmc}@connect.ust.hk, palomar@ust.hk |
| Pseudocode | Yes | Algorithm 1: Sequential block merging (SBM). (See the illustrative sketch after the table.) |
| Open Source Code | Yes | The code is available in https://github.com/Xiwen1997/Isotone Optimization. |
| Open Datasets | Yes | To illustrate the practicality of our method in real-world applications, we use the Adult data set, available from the UCI Machine Learning repository. |
| Dataset Splits | No | The paper mentions using 'randomly generated data sets, with the initial violating rate around 20-50%' for synthetic data, and the 'Adult data set' for real data, but does not provide specific training, validation, or test split percentages or sample counts. |
| Hardware Specification | No | The paper does not specify any hardware components (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions benchmark software like 'isotone', 'quadprog', 'IRP', and 'IPM' but does not provide version numbers for these or any other software dependencies used for their implementation. |
| Experiment Setup | Yes | We set λ = 20, ϵ = 0.1, p = 302, and step size η = 5 × 10⁻⁴. |
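
The Pseudocode row cites Algorithm 1, Sequential Block Merging (SBM), which the paper presents for general isotone optimization. As a rough illustration of the block-merging idea only, and not the authors' SBM, the sketch below implements the classic pool-adjacent-violators routine for squared-loss isotonic regression under a non-decreasing constraint; the function name `pava_block_merge` and its interface are assumptions made here for illustration.

```python
import numpy as np

def pava_block_merge(y, w=None):
    """Squared-loss isotonic regression via pool-adjacent-violators.

    Merges adjacent blocks whose weighted means violate the
    non-decreasing order, then assigns each block its mean.
    """
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)

    # Each block stores [weighted sum, total weight, number of points].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([wi * yi, wi, 1])
        # Merge backwards while the last two block means are out of order.
        while len(blocks) > 1 and (
            blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]
        ):
            s, t, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += t
            blocks[-1][2] += c

    # Expand block means back to a full-length solution.
    return np.concatenate([np.full(c, s / t) for s, t, c in blocks])

# Example usage
print(pava_block_merge([1.0, 3.0, 2.0, 5.0, 4.0]))
# -> [1.  2.5 2.5 4.5 4.5]
```

Each block keeps a running weighted sum, total weight, and length, so a merge costs O(1) and the whole pass is linear in the input length. The authors' actual SBM implementation for the more general objectives studied in the paper is available through the repository linked in the Open Source Code row.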