RankUp: Boosting Semi-Supervised Regression with an Auxiliary Ranking Classifier
Authors: Pin-Yen Huang, Szu-Wei Fu, Yu Tsao
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate RankUp's performance across various tasks. The experimental settings are described in Section 4.1. The main results for RankUp under different label configurations are presented in Section 4.2, while Section 4.3 provides additional results on audio and text datasets. |
| Researcher Affiliation | Collaboration | Pin-Yen Huang, Academia Sinica, Taipei, Taiwan (pyhuang97@gmail.com); Szu-Wei Fu, NVIDIA, Taipei, Taiwan (szuweif@nvidia.com); Yu Tsao, Academia Sinica, Taipei, Taiwan (yu.tsao@citi.sinica.edu.tw) |
| Pseudocode | Yes | Algorithm 1 Auxiliary Ranking Classifier (with FixMatch); see the first sketch after the table. |
| Open Source Code | Yes | Our code and log data are open-sourced at https://github.com/pm25/semi-supervised-regression. |
| Open Datasets | Yes | To simulate the semi-supervised setting, we randomly sample a portion of the dataset as labeled data, treating the remainder as unlabeled. To evaluate performance, we use three diverse datasets: UTKFace [37], an image age estimation dataset; BVCC [8], an audio quality assessment dataset; and Yelp Review [1], a text sentiment analysis (opinion mining) dataset. We open-source the train-test splits used for conducting the experiments in this paper at https://github.com/pm25/regression-datasets. |
| Dataset Splits | Yes | If the dataset provides a pre-defined train-eval-test split, we utilize the training split to train the model and evaluate its performance on the evaluation or test split. If the dataset does not provide such a split, we randomly sample 80% of the data as the training set and the remaining 20% as the test set. (This split, together with the labeled/unlabeled sampling above, is illustrated in the second sketch after the table.) |
| Hardware Specification | Yes | All experiments reported in this paper were conducted using an NVIDIA Titan XP with 12 GB of VRAM and an NVIDIA GeForce RTX 2080 Ti, also equipped with 12 GB of VRAM. |
| Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers such as 'PyTorch 1.9' or 'CUDA 11.1'. It mentions the 'USB [26] codebase' and specific models, but gives no software versions needed for replication. |
| Experiment Setup | Yes | The training and testing details of our experiments are outlined in Appendix A.12. In this section, we list the hyperparameters used in each experimental setting presented in the paper. Table 8 provides the common hyperparameters for the base models... Specific hyperparameter configurations for each semi-supervised regression method are detailed in Table 9. |
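
Since the paper's Algorithm 1 trains the auxiliary ranking classifier with FixMatch, a minimal sketch of that combination may help. This is not the authors' implementation (their code is linked above); the per-sample scalar ranking score, the pairwise difference logits, and the confidence threshold `tau` are assumptions made purely for illustration.

```python
# Hedged sketch of an auxiliary ranking classifier trained FixMatch-style.
# Assumptions (not taken from the paper): the ranking head emits one scalar
# score per sample, pairwise logits are score differences, and the FixMatch
# threshold `tau` is applied to sigmoid confidences.
import torch
import torch.nn.functional as F

def pairwise_logits(scores: torch.Tensor) -> torch.Tensor:
    # All-pairs ranking logits: logit[i, j] = score[i] - score[j], so a
    # positive logit means sample i is predicted to rank above sample j.
    return scores.unsqueeze(1) - scores.unsqueeze(0)

def arc_supervised_loss(scores: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Pairwise ranking labels derived from the regression targets:
    # label[i, j] = 1 if target[i] > target[j] else 0 (ties count as 0).
    logits = pairwise_logits(scores)
    labels = (targets.unsqueeze(1) > targets.unsqueeze(0)).float()
    return F.binary_cross_entropy_with_logits(logits, labels)

def arc_unsupervised_loss(scores_weak: torch.Tensor,
                          scores_strong: torch.Tensor,
                          tau: float = 0.95) -> torch.Tensor:
    # FixMatch-style consistency on the ranking task: pseudo-label pairs
    # from the weakly augmented view, keep only confident pairs, and train
    # the strongly augmented view against those pseudo-labels.
    with torch.no_grad():
        probs = torch.sigmoid(pairwise_logits(scores_weak))
        confident = ((probs > tau) | (probs < 1.0 - tau)).float()
        pseudo = (probs > 0.5).float()
    loss = F.binary_cross_entropy_with_logits(
        pairwise_logits(scores_strong), pseudo, reduction="none")
    return (loss * confident).sum() / confident.sum().clamp(min=1.0)
```

In a full training loop, `arc_supervised_loss` would be computed on labeled batches, `arc_unsupervised_loss` on weakly and strongly augmented views of unlabeled batches, and both would be added to the ordinary regression loss with weighting coefficients.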
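The sampling protocol quoted in the Open Datasets and Dataset Splits rows can likewise be condensed into a few lines. This is a hypothetical sketch, not the released split code; `num_labeled`, `seed`, and `test_fraction` are illustrative parameters, and the authors publish their actual train-test splits at https://github.com/pm25/regression-datasets.

```python
# Hypothetical sketch of the described split procedure; parameter names
# and the seeding scheme are illustrative, not the authors' code.
import numpy as np

def make_ssl_splits(n_samples: int, num_labeled: int,
                    seed: int = 0, test_fraction: float = 0.2):
    # When no predefined split exists: hold out 20% of the data as the
    # test set, then sample `num_labeled` training examples as labeled
    # data and treat the remaining training examples as unlabeled.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_samples)
    n_test = int(test_fraction * n_samples)
    test_idx, train_idx = perm[:n_test], perm[n_test:]
    labeled_idx = rng.choice(train_idx, size=num_labeled, replace=False)
    unlabeled_idx = np.setdiff1d(train_idx, labeled_idx)
    return labeled_idx, unlabeled_idx, test_idx
```

For example, `labeled, unlabeled, test = make_ssl_splits(len(dataset), num_labeled=250)` (with `num_labeled` chosen arbitrarily here) reproduces the shape of the protocol, though not the authors' exact released indices.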