Learning the random variables in Monte Carlo simulations with stochastic gradient descent: Machine learning for parametric PDEs and financial derivative pricing

Authors:

Sebastian Becker (1), Arnulf Jentzen (2, 3), Marvin S. Müller (4), Philippe von Wurstemberger (1, 5)

Affiliation:

1. Risklab, Department of Mathematics, ETH Zurich, Zurich, Switzerland

2. School of Data Science and Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen, Shenzhen, China

3. Applied Mathematics: Institute for Analysis and Numerics, University of Münster, Münster, Germany

4. 2Xideas Switzerland AG, Küsnacht, Switzerland

5. School of Data Science, The Chinese University of Hong Kong, Shenzhen, Shenzhen, China

Abstract

In financial engineering, prices of financial products are computed approximately many times each trading day with (slightly) different parameters in each calculation. In many financial models such prices can be approximated by means of Monte Carlo (MC) simulations. To obtain a good approximation, the MC sample size usually needs to be considerably large, resulting in a long computing time for a single approximation. A natural deep learning approach to reduce the computation time when new prices need to be calculated as quickly as possible would be to train an artificial neural network (ANN) to learn the function which maps the parameters of the model and of the financial product to the price of the financial product. However, it turns out empirically that this approach leads to approximations with unacceptably high errors, in particular when the error is measured in the L∞-norm, and it seems that ANNs are not capable of closely approximating prices of financial products as functions of the model and product parameters in real-life applications. This is not entirely surprising given the high-dimensional nature of the problem and the fact that it has recently been proved for a large class of algorithms, including the deep learning approach outlined above, that such methods are in general not capable of overcoming the curse of dimensionality for such approximation problems in the L∞-norm. In this article we introduce a new numerical approximation strategy for parametric approximation problems, including the parametric financial pricing problems described above, and we illustrate by means of several numerical experiments that the introduced approximation strategy achieves very high accuracy for a variety of high-dimensional parametric approximation problems, even in the L∞-norm. A central aspect of the proposed approximation strategy is to combine MC algorithms with machine learning techniques to, roughly speaking, learn the random variables (LRV) in MC simulations. In other words, we employ stochastic gradient descent (SGD) optimization methods not to train the parameters of standard ANNs but instead to learn the random variables appearing in MC approximations. In that sense, the proposed LRV strategy has strong links to Quasi-Monte Carlo (QMC) methods as well as to the field of algorithm learning. Our numerical simulations strongly indicate that the LRV strategy might indeed be capable of overcoming the curse of dimensionality in the L∞-norm in several cases where the standard deep learning approach has been proven not to be able to do so. This is not a contradiction to the established lower bounds mentioned above because the LRV strategy lies outside the class of algorithms for which such lower bounds have been established in the scientific literature. The proposed LRV strategy is of a general nature and is not restricted to the parametric financial pricing problems described above, but is applicable to a large class of approximation problems. In this article we numerically test the LRV strategy for the pricing of European call options in the Black-Scholes model with one underlying asset, for the pricing of European worst-of basket put options in the Black-Scholes model with three underlying assets, for the pricing of European average put options in the Black-Scholes model with three underlying assets and knock-in barriers, as well as for stochastic Lorenz equations. For these examples the LRV strategy produces highly convincing numerical results when compared with standard MC simulations, QMC simulations using Sobol sequences, SGD-trained shallow ANNs, and SGD-trained deep ANNs.
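To make the LRV idea above concrete, the following minimal sketch (written in PyTorch purely for illustration; it is not the authors' implementation, and names such as lrv_price and bs_price are hypothetical) keeps the usual Monte Carlo pricing formula for a European call in the one-asset Black-Scholes model but treats its N Gaussian samples as trainable parameters, which are then optimized with SGD over randomly drawn spots and strikes. For simplicity, the closed-form Black-Scholes price serves as the training target here; in problems without a closed-form reference, one could instead train against independent Monte Carlo estimates of the discounted payoff.

```python
# Minimal, self-contained sketch of the "learning the random variables" (LRV)
# idea for a European call in the one-asset Black-Scholes model.  All names
# and hyperparameters are illustrative choices, not the authors' reference code.

import math
import torch

N = 64                          # number of learned "random variables"
r, sigma, T = 0.05, 0.2, 1.0    # interest rate, volatility, maturity (fixed here)

# The nodes z are initialized like ordinary Gaussian MC samples, but are
# declared trainable and will be adjusted by SGD.
z = torch.randn(N, requires_grad=True)
opt = torch.optim.SGD([z], lr=1e-2)

def lrv_price(S0, K):
    """MC-type pricing formula with learned nodes z instead of fresh samples."""
    ST = S0[:, None] * torch.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z[None, :])
    payoff = torch.clamp(ST - K[:, None], min=0.0)
    return math.exp(-r * T) * payoff.mean(dim=1)

def bs_price(S0, K):
    """Closed-form Black-Scholes call price, used here as the training target."""
    normal = torch.distributions.Normal(0.0, 1.0)
    d1 = (torch.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * normal.cdf(d1) - K * math.exp(-r * T) * normal.cdf(d2)

for step in range(20_000):
    # Draw a batch of product parameters (spot S0 and strike K) at random and
    # push the LRV formula towards the reference price on this batch.
    S0 = 80.0 + 40.0 * torch.rand(256)
    K = 80.0 + 40.0 * torch.rand(256)
    loss = ((lrv_price(S0, K) - bs_price(S0, K)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, lrv_price(S0, K) evaluates the learned N-term formula: it is
# as cheap as an N-sample MC estimate, but ideally far more accurate uniformly
# over the sampled parameter range.
```

Viewed in this way, the learned nodes play a role comparable to the deterministic point sets of QMC methods, except that they are fitted by SGD to the parametric problem at hand, which is the sense in which the abstract links the LRV strategy to QMC methods and algorithm learning.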

Funder

Deutsche Forschungsgemeinschaft

National Natural Science Foundation of China

Publisher

Wiley

Subject

Applied Mathematics, Economics and Econometrics, Social Sciences (miscellaneous), Finance, Accounting

Cited by 1 article:

1. Learning smooth functions in high dimensions;Handbook of Numerical Analysis;2024
