Uncertainty-Aware Lookahead Factor Models for Quantitative Investing

Authors: Lakshay Chauhan, John Alberg, Zachary Lipton

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In retrospective analysis, we leverage an industry-grade portfolio simulator (backtester) to show simultaneous improvement in annualized return and Sharpe ratio. Specifically, the simulated annualized return for the uncertainty-aware model is 17.7% (vs 14.0% for a standard factor model) and the Sharpe ratio is 0.84 (vs 0.52). (A hedged sketch of how these two metrics are conventionally computed follows the table.)
Researcher Affiliation | Collaboration | 1Euclidean Technologies, Seattle, USA; 2Carnegie Mellon University, Pittsburgh, USA; 3Amazon AI, Seattle, USA. Correspondence to: Lakshay Chauhan <lakshay.chauhan@euclidean.com>, John Alberg <john.alberg@euclidean.com>, Zachary Lipton <zlipton@cmu.edu>.
Pseudocode | No | The paper describes the 'simulation algorithm' in paragraph form within Section 5 but does not provide any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | No | The paper states: 'Our features consist of reported financial information as archived by the Compustat North America and Compustat Snapshot databases.' Compustat is a commercial database, not a publicly available or open dataset.
Dataset Splits | Yes | Data in the in-sample period range from Jan 1, 1970 to Dec 31, 1999 (1.2M data points), while out-of-sample test data range from Jan 1, 2000, to Dec 31, 2019 (1M data points). ... we hold out a validation set by randomly selecting 30% of the stocks from the in-sample period. (A sketch of this split appears after the table.)
Hardware Specification | Yes | It took 150 epochs to train an ensemble on a machine with 16 Intel Xeon E5 cores and 1 Nvidia P100 GPU.
Software Dependencies | No | The optimizer AdaDelta (Zeiler, 2012) is used with an initial learning rate of 0.01. ... Glorot Uniform Initialization (Glorot & Bengio, 2010)... batch normalization (Ioffe & Szegedy, 2015). While specific optimizers and normalization techniques are mentioned, no version numbers are provided for software frameworks (e.g., TensorFlow, PyTorch) or for the optimizer implementation itself.
Experiment Setup | Yes | Table 1. MLP, LSTM Hyperparameters: Batch Size 256, Hidden Units 2048 (MLP) / 512 (LSTM), Hidden Layers 1, Dropout 0.25 (MLP) / 0.0 (LSTM), Recurrent Dropout n/a (MLP) / 0.25 (LSTM), Max Gradient Norm 1, Max Norm 3, α1 0.75 (MLP) / 0.5 (LSTM), α2 n/a (MLP) / 0.7 (LSTM). (A hedged configuration sketch using these values follows the table.)
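
The annualized return (17.7% vs 14.0%) and Sharpe ratio (0.84 vs 0.52) quoted in the Research Type row come from the paper's proprietary backtester. As a point of reference only, the sketch below shows how these two metrics are conventionally computed from a series of simulated monthly portfolio returns; the function names, the zero risk-free rate, and the sqrt(12) annualization convention are assumptions, not details taken from the paper.

    import numpy as np

    def annualized_return(monthly_returns):
        # Geometric annualization of simple monthly returns.
        total_growth = np.prod(1.0 + np.asarray(monthly_returns))
        years = len(monthly_returns) / 12.0
        return total_growth ** (1.0 / years) - 1.0

    def sharpe_ratio(monthly_returns, risk_free_monthly=0.0):
        # Annualized Sharpe ratio of monthly excess returns (sqrt(12) scaling assumed).
        excess = np.asarray(monthly_returns) - risk_free_monthly
        return np.sqrt(12.0) * excess.mean() / excess.std(ddof=1)

    # Hypothetical usage on a synthetic 20-year monthly return series.
    rng = np.random.default_rng(0)
    simulated = rng.normal(loc=0.014, scale=0.045, size=240)
    print(annualized_return(simulated), sharpe_ratio(simulated))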
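
The dataset split described in the Dataset Splits row amounts to a temporal cut plus a by-stock validation holdout. The sketch below assumes a long-format pandas DataFrame with hypothetical 'date' and 'ticker' columns; the paper does not specify a data schema, so the column names and helper signature are illustrative only.

    import numpy as np
    import pandas as pd

    def split_by_date_and_stock(df: pd.DataFrame, val_frac: float = 0.30, seed: int = 0):
        # In-sample period: Jan 1, 1970 to Dec 31, 1999; out-of-sample test: Jan 1, 2000 to Dec 31, 2019.
        dates = pd.to_datetime(df["date"])
        in_sample = df[(dates >= "1970-01-01") & (dates <= "1999-12-31")]
        test = df[(dates >= "2000-01-01") & (dates <= "2019-12-31")]

        # Hold out 30% of the in-sample stocks (not rows) as a validation set.
        rng = np.random.default_rng(seed)
        stocks = in_sample["ticker"].unique()
        val_stocks = set(rng.choice(stocks, size=int(val_frac * len(stocks)), replace=False))
        is_val = in_sample["ticker"].isin(val_stocks)
        return in_sample[~is_val], in_sample[is_val], test  # train, validation, test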
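
The paper does not name its deep learning framework, so the following Keras wiring of the Table 1 MLP column is an assumption. The hidden size, dropout, Glorot uniform initialization, max-norm constraint, gradient-norm clipping, and AdaDelta learning rate come from the quoted hyperparameters; the input dimension, ReLU activation, and plain MSE loss are placeholders (the paper's uncertainty-aware objective, parameterized by α1, is not reconstructed here).

    import tensorflow as tf

    def build_mlp(input_dim: int) -> tf.keras.Model:
        # Single hidden layer of 2048 units with batch norm and dropout 0.25 (Table 1, MLP column).
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(input_dim,)),
            tf.keras.layers.Dense(
                2048,
                activation="relu",                                  # activation is an assumption
                kernel_initializer="glorot_uniform",                # Glorot Uniform Initialization
                kernel_constraint=tf.keras.constraints.MaxNorm(3),  # Max Norm 3
            ),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.Dropout(0.25),
            tf.keras.layers.Dense(1),
        ])
        # AdaDelta with initial learning rate 0.01 and gradient-norm clipping at 1.
        optimizer = tf.keras.optimizers.Adadelta(learning_rate=0.01, clipnorm=1.0)
        model.compile(optimizer=optimizer, loss="mse")  # placeholder loss, not the paper's objective
        return model

    # Hypothetical training call: batch size 256 (Table 1), 150 epochs (Hardware Specification row).
    # model = build_mlp(input_dim=20); model.fit(x_train, y_train, batch_size=256, epochs=150)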