Weakly Supervised Regression with Interval Targets

Authors: Xin Cheng, Yuzhou Cao, Ximing Li, Bo An, Lei Feng

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on various datasets demonstrate the effectiveness of our proposed method." "In this section, we conduct extensive experiments to validate the effectiveness of our proposed limiting method."
Researcher Affiliation | Academia | (1) College of Computer Science, Chongqing University, China; (2) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (3) College of Computer Science and Technology, Jilin University, China.
Pseudocode | No | The paper describes its methods through text and mathematical equations but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | "Datasets. We conduct experiments on nine datasets, including two computer vision datasets (AgeDB (Moschoglou et al., 2017) and IMDB-WIKI (Rothe et al., 2018)), one natural language processing dataset (STS-B (Cer et al., 2017)), and six datasets from the UCI Machine Learning Repository (Dua & Graff, 2017)..."
Dataset Splits | Yes | "Then we randomly split each dataset into training, validation, and test sets by the proportions of 60%, 20%, and 20%, respectively."
Hardware Specification | No | The paper specifies the neural network architectures (e.g., ResNet-50, MLP) and optimizers used, but it does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer, ResNet-50 backbone, and GloVe word embeddings, but it does not specify version numbers for these or any other software libraries or frameworks used in the experiments.
Experiment Setup | Yes | "For the linear model and the MLP model, we use the Adam optimization method (Kingma & Ba, 2015) with the batch size set to 512 and the number of training epochs set to 1,000, and the learning rate for all methods is selected from {10^-2, 10^-3}." "We use the Adam optimizer to train all methods for 100 epochs with an initial learning rate of 10^-3 and fix the batch size to 256." "We also use the Adam optimizer to train all methods with an initial learning rate of 10^-4 and fix the batch size to 256."
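
The Dataset Splits row above quotes a 60%/20%/20% random split. Since the paper releases no code, the following is only a minimal sketch of how such a split could be reproduced; the function name, random seed, and array-based data layout are assumptions, not details from the paper.

```python
import numpy as np

def random_split_60_20_20(X, y, seed=0):
    """Randomly split (X, y) into 60% train / 20% validation / 20% test,
    matching the proportions quoted from the paper."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(0.6 * len(X))
    n_val = int(0.2 * len(X))
    train, val, test = np.split(idx, [n_train, n_train + n_val])
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])
```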
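
The Experiment Setup row quotes Adam with batch size 512, 1,000 training epochs, and a learning rate chosen from {10^-2, 10^-3} for the linear and MLP models. Below is a minimal PyTorch sketch of such a training loop under stated assumptions: the hidden-layer sizes and the plain MSE loss on point targets are placeholders, not the paper's interval-target method.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_mlp(X_train, y_train, lr=1e-3, epochs=1000, batch_size=512):
    """Adam / batch size 512 / 1,000 epochs, as quoted for the MLP model.
    The MSE loss on point targets is a stand-in; the paper's losses for
    interval targets are not reproduced here."""
    loader = DataLoader(TensorDataset(X_train, y_train),
                        batch_size=batch_size, shuffle=True)
    model = nn.Sequential(  # hidden size is assumed, not taken from the paper
        nn.Linear(X_train.shape[1], 128), nn.ReLU(),
        nn.Linear(128, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb).squeeze(-1), yb)
            loss.backward()
            optimizer.step()
    return model
```

Selection between the two candidate learning rates would then be carried out on the 20% validation split described above.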