Tilted Sparse Additive Models
Authors: Yingjie Wang, Hong Chen, Weifeng Liu, Fengxiang He, Tieliang Gong, Youcheng Fu, Dacheng Tao
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The empirical assessments verify the competitive performance of our approach on both synthetic and real data. |
| Researcher Affiliation | Collaboration | 1 College of Control Science and Engineering, China University of Petroleum (East China), Qingdao, China; 2 College of Informatics, Huazhong Agricultural University, China; 3 Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, Wuhan, China; 4 JD Explore Academy, JD.com, Inc., Beijing, China; 5 Artificial Intelligence and its Applications Institute, School of Informatics, The University of Edinburgh, Edinburgh, United Kingdom; 6 School of Computer Science and Technology, Xi'an Jiaotong University, China; 7 The University of Sydney, Sydney, Australia. |
| Pseudocode | Yes | Algorithm 1: Prox-SVRG for (13) (a generic Prox-SVRG sketch is given below the table) |
| Open Source Code | No | The paper does not provide any specific repository links or explicit statements about the public release of the source code for the methodology described. |
| Open Datasets | Yes | In this section, we evaluate the performance of T-SpAM on Coronal Mass Ejections (CME) data. The CME data (https://cdaw.gsfc.nasa.gov/CME_list/) |
| Dataset Splits | Yes | In all synthetic experiments, we independently generate training dataset, validation dataset and test dataset, where the hyper-parameters t and λ are tuned in grids {0.1, 0.5, 1, 2} and {10^-6, 10^-5, 10^-4, 10^-3, 10^-2, 10^-1, 1} on the validation dataset. |
| Hardware Specification | No | The paper discusses computational efficiency and execution time but does not specify any particular hardware details such as CPU or GPU models used for the experiments. |
| Software Dependencies | No | The paper references algorithms and techniques like Prox-SVRG and random Fourier features but does not provide specific version numbers for any software dependencies or libraries used. |
| Experiment Setup | Yes | In all synthetic experiments, we independently generate training dataset, validation dataset and test dataset, where the hyper-parameters t and λ are tuned in grids {0.1, 0.5, 1, 2} and {10^-6, 10^-5, 10^-4, 10^-3, 10^-2, 10^-1, 1} on the validation dataset (a grid-search sketch is given below the table). |
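
The Pseudocode row above refers to Algorithm 1, a Prox-SVRG solver for the paper's problem (13). Since the exact update rules are not reproduced on this page, the following is a minimal, generic Prox-SVRG sketch for a smooth finite-sum loss with an ℓ1 penalty; the squared loss, the soft-thresholding prox, and all function names are illustrative assumptions and do not capture the tilted loss or additive-model structure of T-SpAM.

```python
import numpy as np

def soft_threshold(w, tau):
    # Proximal operator of tau * ||w||_1 (element-wise soft-thresholding).
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def prox_svrg(X, y, lam=1e-3, eta=0.1, n_epochs=20, m=None, seed=0):
    """Generic Prox-SVRG for (1/n) sum_i 0.5*(x_i^T w - y_i)^2 + lam*||w||_1.

    A textbook Prox-SVRG template, not the paper's Algorithm 1: the tilted
    risk and the additive-component structure of T-SpAM are omitted here.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or n                       # inner-loop length
    w_snap = np.zeros(d)             # snapshot point

    def grad_i(w, i):                # gradient of the i-th squared loss
        return (X[i] @ w - y[i]) * X[i]

    for _ in range(n_epochs):
        full_grad = X.T @ (X @ w_snap - y) / n   # full gradient at snapshot
        w = w_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced stochastic gradient
            v = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w = soft_threshold(w - eta * v, eta * lam)   # proximal step
        w_snap = w                   # last-iterate snapshot update
    return w_snap
```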
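
The Dataset Splits / Experiment Setup rows state that t and λ are tuned on a validation split over the grids {0.1, 0.5, 1, 2} and {10^-6, ..., 1}. Below is a minimal sketch of that grid search; `fit` and `rmse` are hypothetical stand-ins for a T-SpAM training routine and a validation metric, not interfaces provided by the paper.

```python
import itertools
import numpy as np

# Hyper-parameter grids quoted in the table above.
t_grid = [0.1, 0.5, 1, 2]
lam_grid = [10.0**k for k in range(-6, 1)]   # 1e-6, ..., 1e-1, 1

def tune(fit, rmse, train, valid):
    """Pick (t, lam) minimizing validation error; `fit` trains a model on
    the training split and `rmse` scores it on the validation split."""
    best_params, best_err = None, np.inf
    for t, lam in itertools.product(t_grid, lam_grid):
        model = fit(train, t=t, lam=lam)
        err = rmse(model, valid)
        if err < best_err:
            best_params, best_err = (t, lam), err
    return best_params, best_err
```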