RT Journal Article
SR Electronic
T1 Deep Reinforcement Learning for Option Replication and Hedging
JF The Journal of Financial Data Science
FD Institutional Investor Journals
SP 44
OP 57
DO 10.3905/jfds.2020.1.045
VO 2
IS 4
A1 Jiayi Du
A1 Muyang Jin
A1 Petter N. Kolm
A1 Gordon Ritter
A1 Yixuan Wang
A1 Bofei Zhang
YR 2020
UL https://pm-research.com/content/2/4/44.abstract
AB The authors propose models for the solution of the fundamental problem of option replication subject to discrete trading, round lotting, and nonlinear transaction costs using state-of-the-art methods in deep reinforcement learning (DRL), including deep Q-learning, deep Q-learning with Pop-Art, and proximal policy optimization (PPO). Each DRL model is trained to hedge a whole range of strikes, and no retraining is needed when the user changes to another strike within the range. The models are general, allowing the user to plug in any option pricing and simulation library and then train them with no further modifications to hedge arbitrary option portfolios. Through a series of simulations, the authors show that the DRL models learn strategies similar to or better than delta hedging. Of all the models, PPO performs best in terms of profit and loss, training time, and amount of data needed for training.

TOPICS: Big data/machine learning, options, risk management, simulations

Key Findings
• The authors propose models for the replication of options over a whole range of strikes subject to discrete trading, round lotting, and nonlinear transaction costs, based on state-of-the-art methods in deep reinforcement learning including deep Q-learning and proximal policy optimization.
• The models allow the user to plug in any option pricing and simulation library and then train them with no further modifications to hedge arbitrary option portfolios.
• A series of simulations demonstrates that the deep reinforcement learning models learn strategies similar to or better than delta hedging.
• Proximal policy optimization outperforms the other models in terms of profit and loss, training time, and amount of data needed for training.
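
For context on the delta-hedging benchmark mentioned in the abstract, the following is a minimal sketch of a discrete delta-hedging baseline of the kind the DRL models are compared against, assuming Black-Scholes dynamics. The library choices (numpy, scipy), the parameter values, the whole-share rounding used as a stand-in for round lotting, and the proportional cost model are illustrative assumptions, not the settings used in the paper.

```python
# Sketch of a discrete delta-hedging baseline under assumed Black-Scholes dynamics.
# Parameter values, whole-share rounding (a stand-in for round lotting), and the
# proportional transaction-cost model are illustrative assumptions, not the paper's.
import numpy as np
from scipy.stats import norm


def bs_call_price_delta(S, K, tau, r, sigma):
    """Black-Scholes price and delta of a European call with time to expiry tau."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    price = S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
    return price, norm.cdf(d1)


def delta_hedge_pnl(S0=100.0, K=100.0, T=0.25, r=0.0, sigma=0.2,
                    n_steps=63, contract_size=100, cost_rate=5e-4, seed=0):
    """Simulate one GBM path and delta-hedge a short call, trading whole shares only."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    premium, _ = bs_call_price_delta(S0, K, T, r, sigma)
    S, shares, cash = S0, 0, premium * contract_size  # sell the call, collect premium
    for i in range(n_steps):
        tau = T - i * dt
        _, delta = bs_call_price_delta(S, K, tau, r, sigma)
        target = int(round(delta * contract_size))      # discrete (whole-share) position
        trade = target - shares
        cash -= trade * S + cost_rate * abs(trade) * S  # trade shares, pay proportional cost
        shares = target
        # Advance the underlying one step under geometric Brownian motion.
        S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal())
    payoff = max(S - K, 0.0) * contract_size            # obligation on the short call
    return cash + shares * S - payoff                   # terminal hedging P&L (cash accrual ignored; r = 0 here)


if __name__ == "__main__":
    pnls = [delta_hedge_pnl(seed=s) for s in range(1000)]
    print(f"mean P&L: {np.mean(pnls):.2f}, std: {np.std(pnls):.2f}")
```

A DRL hedger in the paper's setting would replace the rounded Black-Scholes delta with a learned policy that maps the state (underlying price, time to expiry, current holdings, strike) to a trade, trained to trade off replication error against transaction costs.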