Deep Smoothing of the Implied Volatility Surface
Authors: Damien Ackerer, Natasa Tagasovska, Thibault Vatter
NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that this approach is particularly useful when only sparse or erroneous data are available. We also quantify the uncertainty of the model predictions in regions with few or no observations. We further explore how deeper NNs improve over shallower ones, as well as other properties of the network architecture. We benchmark our method against standard IVS models. By evaluating our method on both training and testing sets, we highlight its capacity to reproduce observed prices and to predict new ones. |
| Researcher Affiliation | Collaboration | Damien Ackerer, UBS, Zürich, Switzerland (damien.ackerer@epfl.ch); Natasa Tagasovska, Swiss Data Science Center, Lausanne, Switzerland (natasa.tagasovska@sdsc.ch); Thibault Vatter, Department of Statistics, Columbia University, New York, USA (thibault.vatter@columbia.edu) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | No | The market data is composed of daily observations of European call options on the S&P 500 index from January to April 2018 and from September to December 2008. The data is retrieved from OptionMetrics. |
| Dataset Splits | Yes | First, we split the daily sample into a training and a testing set. Second, we fit the model on the training set and evaluate its performance on the testing set. We use two different configurations for training and testing. In the interpolation setting, for each maturity, we randomly select half of the contracts. ... In the extrapolation setting, for each maturity, we select half of the contracts whose log moneyness is between the 10% and 90% quantiles of the log moneyness in the corresponding slice. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | A total of 50 models with different random seeds for parameter initialization have been trained for 5,000 epochs for each configuration. Figure 2 displays statistics on the losses in (6) for trained models with different numbers of layers, neurons per layer, and penalty values λ. Finally, we again use three values of λ in order to study how the arbitrage-related penalties affect the results. (A training-grid sketch follows the table.) |
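
The interpolation and extrapolation splits quoted in the Dataset Splits row can be summarized with a short sketch. The code below is not from the paper: the column names, the per-maturity grouping, and the use of empirical quantiles for the extrapolation cut are assumptions based on the quoted description, and the data frame is synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-in for one day of S&P 500 call quotes: each row is a
# contract with a maturity (in years) and a log moneyness; purely illustrative.
options = pd.DataFrame({
    "maturity": np.repeat([0.1, 0.25, 0.5, 1.0], 50),
    "log_moneyness": np.tile(np.linspace(-0.4, 0.4, 50), 4),
})

def split_interpolation(slice_df):
    """Interpolation setting: within a maturity slice, randomly assign half
    of the contracts to the training set and the other half to testing."""
    idx = rng.permutation(len(slice_df))
    half = len(slice_df) // 2
    return slice_df.iloc[idx[:half]], slice_df.iloc[idx[half:]]

def split_extrapolation(slice_df, lower_q=0.10, upper_q=0.90):
    """Extrapolation setting (one reading of the quoted description): train on
    contracts whose log moneyness lies between the 10% and 90% quantiles of
    the slice, and hold out the wings for testing."""
    lo, hi = slice_df["log_moneyness"].quantile([lower_q, upper_q])
    inside = slice_df["log_moneyness"].between(lo, hi)
    return slice_df[inside], slice_df[~inside]

train_parts, test_parts = [], []
for _, slice_df in options.groupby("maturity"):
    tr, te = split_interpolation(slice_df)  # or split_extrapolation(slice_df)
    train_parts.append(tr)
    test_parts.append(te)
train, test = pd.concat(train_parts), pd.concat(test_parts)
print(f"train: {len(train)} contracts, test: {len(test)} contracts")
```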
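
The Experiment Setup row describes a grid over network depth, width, penalty weight λ, and 50 random seeds, each trained for 5,000 epochs. The PyTorch sketch below illustrates such a loop under stated assumptions: the architecture, Softplus activation, optimizer, grid values, placeholder data, and the toy monotonicity penalty standing in for the paper's arbitrage-related penalties in loss (6) are all assumptions, not the authors' implementation.

```python
import itertools
import torch
import torch.nn as nn

def make_mlp(n_layers, n_neurons):
    """Feed-forward network mapping (log moneyness, maturity) to an implied
    volatility; depth and width are the quantities varied in the grid.
    The Softplus activation is an assumption, not taken from the paper."""
    layers, width_in = [], 2
    for _ in range(n_layers):
        layers += [nn.Linear(width_in, n_neurons), nn.Softplus()]
        width_in = n_neurons
    layers.append(nn.Linear(width_in, 1))
    return nn.Sequential(*layers)

def toy_penalty(model, x):
    """Toy stand-in for the arbitrage-related penalties in loss (6): it only
    penalises a negative slope of the output in the maturity direction, not
    the full set of static-arbitrage constraints used in the paper."""
    x = x.clone().requires_grad_(True)
    grad_x = torch.autograd.grad(model(x).sum(), x, create_graph=True)[0]
    return torch.relu(-grad_x[:, 1]).mean()

# Grid values below are placeholders; the paper reports 50 seeds and
# 5,000 epochs per configuration but the depths/widths here are illustrative.
depths, widths, lambdas = [2, 4], [40, 80], [0.0, 1.0, 10.0]
n_seeds, n_epochs = 50, 5_000

x = torch.rand(256, 2)  # placeholder inputs (log moneyness, maturity)
y = torch.rand(256, 1)  # placeholder implied-volatility targets

for depth, width, lam, seed in itertools.product(depths, widths, lambdas, range(n_seeds)):
    torch.manual_seed(seed)  # one model per random initialization
    model = make_mlp(depth, width)
    optim = torch.optim.Adam(model.parameters())
    for _ in range(n_epochs):
        optim.zero_grad()
        loss = nn.functional.mse_loss(model(x), y) + lam * toy_penalty(model, x)
        loss.backward()
        optim.step()
```

Running the full grid as written is expensive; in practice one would shrink the placeholder values when experimenting with the sketch.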