Task-Based Learning via Task-Oriented Prediction Network with Applications in Finance

Authors: Di Chen, Yada Zhu, Xiaodong Cui, Carla Gomes

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate the performance of TOPNet on two real-world financial prediction tasks, revenue surprise forecasting and credit risk modeling. The experimental results demonstrate that TOPNet significantly outperforms both traditional modeling with standard losses and modeling with hand-crafted heuristic differentiable surrogate losses."
Researcher Affiliation | Collaboration | Di Chen (Cornell University), Yada Zhu (IBM T. J. Watson Research Center), Xiaodong Cui (IBM T. J. Watson Research Center), and Carla P. Gomes (Cornell University); di@cs.cornell.edu, {yzhu, cuix}@us.ibm.com, gomes@cs.cornell.edu
Pseudocode | Yes | Algorithm 1: End-to-End learning process for TOPNet (an illustrative end-to-end sketch follows this table).
Open Source Code | No | "Due to business confidentiality, we are not allowed to share the datasets."
Open Datasets | No | "Due to business confidentiality, we are not allowed to share the datasets."
Dataset Splits | Yes | "We split the whole dataset chronologically into training set (01-01-2004 to 06-30-2015, 3,267,584 data points), validation set (07-01-2015 to 06-30-2017, 465,383 data points) and test set (07-01-2017 to 06-30-2019, 421,225 data points) to validate the performance of models." (A chronological-split sketch follows this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments; it only mentions training-process details such as the optimizer and batch size.
Software Dependencies | No | The paper mentions using the "Adam optimizer [Kingma and Ba, 2014]" and "Long Short-Term Memory (LSTM) networks [Hochreiter and Schmidhuber, 1997]" but does not specify software versions for these or any other libraries or frameworks.
Experiment Setup | Yes | "For all models in our experiments, the training process was done for 50 epochs, using a batch size of 1024, an Adam optimizer [Kingma and Ba, 2014] with a learning rate of 3e-5, and early stopping to accelerate the training process and prevent overfitting." (A training-loop sketch with these settings follows this table.)
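
The paper's Algorithm 1 (end-to-end learning for TOPNet) is not reproduced in this report. As a purely illustrative, hedged sketch, the snippet below shows one generic way an end-to-end loop could alternate between fitting a differentiable estimator of a non-differentiable task metric and updating the predictor against that learned surrogate. PyTorch, all module and variable names (predictor, metric_net, task_metric), and the toy metric are assumptions for illustration only, not the authors' implementation.

```python
# Purely illustrative sketch: learn a differentiable surrogate of a
# non-differentiable task metric and train the predictor against it.
# PyTorch, the names, and the toy metric are assumptions; NOT the paper's Algorithm 1.
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
metric_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_pred = torch.optim.Adam(predictor.parameters(), lr=3e-5)
opt_metric = torch.optim.Adam(metric_net.parameters(), lr=3e-5)

def task_metric(y_hat, y):
    # Stand-in for a task-based, non-differentiable evaluation
    # (here: sign agreement); hypothetical, not the paper's metric.
    return (torch.sign(y_hat) == torch.sign(y)).float().mean()

def train_step(x, y):
    y_hat = predictor(x)

    # (a) Fit metric_net so its output tracks the true task metric on this batch.
    est = metric_net(torch.cat([y_hat.detach(), y], dim=-1)).mean()
    metric_loss = (est - task_metric(y_hat.detach(), y)) ** 2
    opt_metric.zero_grad()
    metric_loss.backward()
    opt_metric.step()

    # (b) Update the predictor to maximize the learned, differentiable surrogate.
    surrogate = metric_net(torch.cat([y_hat, y], dim=-1)).mean()
    opt_pred.zero_grad()
    (-surrogate).backward()
    opt_pred.step()

# Example call on a dummy batch of 1024 examples with 64 features.
train_step(torch.randn(1024, 64), torch.randn(1024, 1))
```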
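
The chronological split quoted in the Dataset Splits row maps directly onto date-based filtering. The sketch below assumes pandas; the file path and the "date" column name are hypothetical, since the datasets themselves are proprietary and not released.

```python
# Minimal sketch of the chronological train/validation/test split described above.
# pandas is an assumption; the file path and "date" column name are hypothetical.
import pandas as pd

df = pd.read_csv("financial_data.csv", parse_dates=["date"])

train = df[(df["date"] >= "2004-01-01") & (df["date"] <= "2015-06-30")]
val   = df[(df["date"] >= "2015-07-01") & (df["date"] <= "2017-06-30")]
test  = df[(df["date"] >= "2017-07-01") & (df["date"] <= "2019-06-30")]

# Paper reports 3,267,584 / 465,383 / 421,225 data points for these ranges.
print(len(train), len(val), len(test))
```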
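
The quoted experiment setup (50 epochs, batch size 1024, Adam with learning rate 3e-5, early stopping) corresponds to a standard supervised training loop. The sketch below assumes PyTorch and an LSTM backbone, since the paper cites Adam and LSTM but names no framework; the model dimensions, early-stopping patience, loss function, and dummy tensors are illustrative assumptions.

```python
# Hedged sketch of the reported setup: 50 epochs, batch size 1024, Adam (lr=3e-5),
# early stopping on validation loss. PyTorch, the LSTM sizes, the MSE loss, and the
# patience value are assumptions not specified in the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class LSTMRegressor(nn.Module):
    def __init__(self, n_features=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict from the last time step

model = LSTMRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)
criterion = nn.MSELoss()                  # standard loss; the paper also studies surrogate losses

# Dummy tensors stand in for the proprietary datasets, which are not released.
train_ds = TensorDataset(torch.randn(4096, 10, 16), torch.randn(4096, 1))
val_ds = TensorDataset(torch.randn(1024, 10, 16), torch.randn(1024, 1))
train_loader = DataLoader(train_ds, batch_size=1024, shuffle=True)
val_loader = DataLoader(val_ds, batch_size=1024)

best_val, patience, bad_epochs = float("inf"), 5, 0   # patience is an assumption
for epoch in range(50):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(criterion(model(x), y).item() for x, y in val_loader) / len(val_loader)

    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:        # early stopping
            break
```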