Financial Thought Experiment: A GAN-based Approach to Vast Robust Portfolio Selection

Authors: Chi Seng Pun, Lei Wang, Hoi Ying Wong

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical studies show that GANr portfolio is more resilient to bleak financial scenarios than CLSGAN and LASSO portfolios.
Researcher Affiliation | Academia | Chi Seng Pun (1), Lei Wang (1), Hoi Ying Wong (2). (1) School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore; (2) Department of Statistics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
Pseudocode | No | The paper describes the GANr architecture and training process in prose and with a diagram, but it does not include pseudocode or a clearly labeled algorithm block.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The data can be downloaded from [Pun, 2018b].
Dataset Splits | Yes | The daily returns of the recent past one year are used as training set... The first training set is divided into 12 folds. In each fold, the chronological order of the data is maintained. Then we carry out 12 folds of cross-validation tests to determine the optimal λ with the highest portfolio return. (A sketch of this chronological split appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or other machine specifications used for running the experiments.
Software Dependencies | No | The paper mentions optimizers (RMSProp) and neural networks but does not list specific software dependencies with version numbers, such as libraries or frameworks.
Experiment Setup | Yes | In our empirical studies, the generator consists of five layers...each layer has p nodes. Specifically, the first two hidden layers adopt ReLU activation function with dropout (see [Srivastava et al., 2014]), where the dropout rate is 0.25, and the last two layers (including output layer) use tanh activation function without dropout. As for the discriminator, the three hidden layers are identical to the generator's, while its output layer uses a linear activation function with one output node. The regressor has no hidden layer and uses a linear activation function with one output node and ℓ1 kernel regularizer. The latent noise input follows a uniform distribution... for the first test, we train GANr 10000 times (i.e. 10000 epochs)... reduce the number of epochs to 150... The discriminator and regressor are trained 5 times for every training step of the generator. The batch size is 50. ...The learning rate is set at 0.0001. (An architecture-and-training sketch appears after the table.)
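
The chronological cross-validation quoted in the Dataset Splits row could look like the following. This is a minimal sketch, not the authors' code: the `fit` and `evaluate` hooks are hypothetical placeholders for portfolio estimation and held-out evaluation, and the leave-one-fold-out aggregation is an assumption, since the paper only states that 12 chronological folds are used and that λ is chosen to maximize portfolio return.

```python
import numpy as np

def chronological_folds(returns, n_folds=12):
    """Split a (T x p) return matrix into contiguous folds,
    preserving the chronological order of the data."""
    return np.array_split(returns, n_folds, axis=0)

def select_lambda(returns, lambdas, fit, evaluate, n_folds=12):
    """Pick the regularization weight with the highest average
    held-out portfolio return (fit/evaluate are hypothetical hooks)."""
    folds = chronological_folds(returns, n_folds)
    best_lam, best_ret = None, -np.inf
    for lam in lambdas:
        fold_returns = []
        for k in range(n_folds):
            train = np.vstack([f for i, f in enumerate(folds) if i != k])
            test = folds[k]
            weights = fit(train, lam)               # estimate portfolio weights
            fold_returns.append(evaluate(test, weights))  # realized return on fold k
        mean_ret = np.mean(fold_returns)
        if mean_ret > best_ret:
            best_lam, best_ret = lam, mean_ret
    return best_lam
```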
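
The Experiment Setup row pins down most of the network configuration, so a sketch is possible. This assumes TensorFlow/Keras (the paper does not name a framework); the asset count p, the latent dimension, the uniform noise range [-1, 1], and the regularization weight are placeholder assumptions, and the losses and update steps are elided.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers, optimizers, Sequential

p = 100          # number of assets (placeholder; determined by the data)
latent_dim = p   # latent noise dimension (assumption; not stated in the paper)

# Generator: first two hidden layers are ReLU with dropout rate 0.25;
# the last two layers (including the output) are tanh; each has p nodes.
generator = Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    layers.Dense(p, activation="relu"),
    layers.Dropout(0.25),
    layers.Dense(p, activation="relu"),
    layers.Dropout(0.25),
    layers.Dense(p, activation="tanh"),
    layers.Dense(p, activation="tanh"),
])

# Discriminator: three hidden layers identical to the generator's,
# then a single linear output node.
discriminator = Sequential([
    tf.keras.Input(shape=(p,)),
    layers.Dense(p, activation="relu"),
    layers.Dropout(0.25),
    layers.Dense(p, activation="relu"),
    layers.Dropout(0.25),
    layers.Dense(p, activation="tanh"),
    layers.Dense(1, activation="linear"),
])

# Regressor: no hidden layer; one linear output node with an l1 kernel
# regularizer, whose weight lam is chosen by the cross-validation above.
lam = 0.01  # placeholder
regressor = Sequential([
    tf.keras.Input(shape=(p,)),
    layers.Dense(1, activation="linear",
                 kernel_regularizer=regularizers.l1(lam)),
])

optimizer = optimizers.RMSprop(learning_rate=1e-4)  # RMSProp, lr = 0.0001

# Training schedule from the paper: batch size 50, and the discriminator
# and regressor are each updated 5 times per generator update.
batch_size, n_critic, epochs = 50, 5, 10000
for epoch in range(epochs):
    for _ in range(n_critic):
        z = tf.random.uniform((batch_size, latent_dim), -1.0, 1.0)  # range assumed
        fake_returns = generator(z)
        # ...update discriminator and regressor here (losses elided)
    z = tf.random.uniform((batch_size, latent_dim), -1.0, 1.0)
    # ...update generator here (adversarial/regression loss elided)
```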