Regression with Cost-based Rejection
Authors: Xin Cheng, Yuzhou Cao, Haobo Wang, Hongxin Wei, Bo An, Lei Feng
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of our proposed method. |
| Researcher Affiliation | Academia | Xin Cheng¹, Yuzhou Cao², Haobo Wang³, Hongxin Wei⁴, Bo An², Lei Feng²; ¹College of Computer Science, Chongqing University, China; ²School of Computer Science and Engineering, Nanyang Technological University, Singapore; ³School of Software Technology, Zhejiang University, China; ⁴Department of Statistics and Data Science, Southern University of Science and Technology, China |
| Pseudocode | No | The paper provides mathematical formulations and theoretical analyses but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links to code repositories or explicit statements about the release of their implementation code. |
| Open Datasets | Yes | We conduct experiments on seven datasets, including one computer vision dataset (AgeDB [35]), one healthcare dataset (BreastPathQ [33]), and five datasets from the UCI Machine Learning Repository [13] (Abalone, Airfoil, Auto-mpg, Housing, and Concrete). |
| Dataset Splits | Yes | For each dataset, we randomly split the original dataset into training, validation, and test sets by the proportions of 60%, 20%, and 20%, respectively. |
| Hardware Specification | No | The paper describes the models used (ResNet-50, MLP, Linear model) and optimizers (Adam) but does not specify any hardware details such as GPU models, CPU types, or cloud computing instances. |
| Software Dependencies | No | The paper mentions general software components like the 'Adam optimizer' and 'ResNet-50', but does not list specific version numbers for any programming languages, libraries, frameworks, or solvers. |
| Experiment Setup | Yes | We use the Adam optimizer to train our method for 100 epochs, where the Slow-Start is set to 40 epochs, the initial learning rate is 10^-3, and the batch size is fixed to 256. For both the linear model and the MLP model, we use the Adam optimization method with the batch size set to 1024 and the number of training epochs set to 1000, where the Slow-Start is set to 200 epochs. The learning rate for all UCI benchmark datasets is selected from {10^-1, 10^-2, 10^-3}. |
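The 60%/20%/20% random split reported above can be sketched in plain Python. The function name and the fixed seed are assumptions for illustration; only the proportions and the random-split protocol come from the paper.

```python
import random

def split_dataset(indices, seed=0):
    """Randomly split example indices into 60% train / 20% validation /
    20% test, matching the proportions reported in the paper.
    `seed` is an illustrative assumption (the paper does not report one)."""
    rng = random.Random(seed)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(1000))
```

Any remainder from the integer rounding falls into the test set, so the three parts always partition the original indices.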
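For reference, the reported hyperparameters can be collected into a single configuration. The dictionary keys and grouping are illustrative assumptions; the values are the ones quoted from the paper.

```python
# Training hyperparameters as reported in the paper.
# Structure and key names are assumptions; values are quoted.
HPARAMS = {
    "resnet50": {           # image experiments (AgeDB, BreastPathQ)
        "optimizer": "Adam",
        "epochs": 100,
        "slow_start_epochs": 40,
        "learning_rate": 1e-3,
        "batch_size": 256,
    },
    "linear_and_mlp": {     # UCI benchmark datasets
        "optimizer": "Adam",
        "epochs": 1000,
        "slow_start_epochs": 200,
        "learning_rates": [1e-1, 1e-2, 1e-3],  # selected per dataset
        "batch_size": 1024,
    },
}
```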