Partial-Label Regression
Authors: Xin Cheng, Deng-Bao Wang, Lei Feng, Min-Ling Zhang, Bo An
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We prove that the latter two methods are model-consistent and provide convergence analyses. Finally, we conducted extensive experiments to demonstrate the effectiveness of our proposed methods. |
| Researcher Affiliation | Academia | (1) College of Computer Science, Chongqing University, Chongqing, China; (2) School of Computer Science and Engineering, Southeast University, Nanjing, China; (3) School of Computer Science and Engineering, Nanyang Technological University, Singapore |
| Pseudocode | No | Not found. The paper describes methods but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | Not found. The paper does not provide any statement or link for open-source code. |
| Open Datasets | Yes | We use seven widely used benchmark regression datasets including Abalone, Airfoil, Auto-mpg, Housing, Concrete, Power-plant, and Cpu-act. All of these datasets can be downloaded from the UCI Machine Learning Repository (https://archive.ics.uci.edu/). |
| Dataset Splits | Yes | For each dataset, we randomly split the original dataset into training, validation, and test sets by the proportions of 60%, 20%, and 20%, respectively. (A hedged split sketch is given after the table.) |
| Hardware Specification | No | Not found. The paper does not provide specific hardware details (e.g., GPU/CPU models) used for running its experiments, only mentioning the types of models trained (linear and MLP). |
| Software Dependencies | No | Not found. The paper mentions the Adam optimization method and various loss functions but does not specify any software names with version numbers for reproducibility. |
| Experiment Setup | Yes | The MLP model is a five-layer (d-20-30-10-1) neural network with the ReLU activation function. For both the linear model and the MLP model, we use the Adam optimization method (Kingma and Ba 2015) with the batch size set to 256 and the number of training epochs set to 1000. For AVGL-Huber and AVGV-Huber, the threshold value of the Huber loss is selected from {1, 5}. For our PIDent method, β1 is fixed at 0.5 and β2 is selected from {10, 100, 500, 1000, 10000}. For all the methods, the learning rate is selected from {0.01, 0.001}. (A model and training sketch under these settings follows the table.) |
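
The 60/20/20 split described in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch, assuming NumPy arrays `X` and `y`; the function name, the fixed seed, and the rounding of split sizes are our assumptions, since the authors do not release code.

```python
# Hypothetical reconstruction of the paper's random 60/20/20 split;
# not the authors' code. Assumes X is an (n, d) array and y an (n,) array.
import numpy as np

def split_dataset(X, y, seed=0):
    """Randomly split into 60% train, 20% validation, 20% test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(X))
    n_train = int(0.6 * len(X))
    n_val = int(0.2 * len(X))
    train, val, test = np.split(perm, [n_train, n_train + n_val])
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])
```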
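
The Experiment Setup row pins down the model and optimizer but not the code. The sketch below instantiates the d-20-30-10-1 MLP with ReLU and the reported Adam settings in PyTorch; the dummy data, the plain supervised Huber objective, and all variable names are our assumptions. The paper's actual partial-label losses (AVGL-Huber, AVGV-Huber, PIDent) are not reproduced here.

```python
# Hedged PyTorch sketch of the reported setup; not the authors' implementation.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

d = 8                                   # input dimension (dataset-specific)
X = torch.randn(1000, d)                # placeholder features
y = torch.randn(1000)                   # placeholder targets (stand-in for candidate labels)

# Five-layer MLP (d-20-30-10-1) with ReLU activations, as reported.
model = nn.Sequential(
    nn.Linear(d, 20), nn.ReLU(),
    nn.Linear(20, 30), nn.ReLU(),
    nn.Linear(30, 10), nn.ReLU(),
    nn.Linear(10, 1),
)

# Adam with lr selected from {0.01, 0.001}; batch size 256; 1000 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.HuberLoss(delta=1.0)     # Huber threshold selected from {1, 5}
# For PIDent, beta1 is fixed at 0.5 and beta2 is tuned over
# {10, 100, 500, 1000, 10000}; that loss is not implemented in this sketch.

loader = DataLoader(TensorDataset(X, y), batch_size=256, shuffle=True)
for epoch in range(1000):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb).squeeze(-1), yb)
        loss.backward()
        optimizer.step()
```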