The Journal of Financial Data Science
Deep Reinforcement Learning for Option Replication and Hedging

Jiayi Du, Muyang Jin, Petter N. Kolm, Gordon Ritter, Yixuan Wang and Bofei Zhang
The Journal of Financial Data Science Fall 2020, 2 (4) 44-57; DOI: https://doi.org/10.3905/jfds.2020.1.045
Jiayi Du is a graduate student at New York University Center for Data Science in New York, NY.
Muyang Jin is a graduate student at New York University Center for Data Science in New York, NY.
Petter N. Kolm is a clinical professor and director of the Mathematics in Finance Master’s Program at the Courant Institute of Mathematical Sciences at New York University in New York, NY.
Gordon Ritter is an adjunct professor at the Courant Institute of Mathematical Sciences, New York University Tandon School of Engineering, Baruch College, and Rutgers University and a partner at Ritter Alpha, LP.
Yixuan Wang is a graduate student at New York University Center for Data Science in New York, NY.
Bofei Zhang is a graduate student at New York University Center for Data Science in New York, NY.
Abstract

The authors propose models that solve the fundamental problem of option replication subject to discrete trading, round lotting, and nonlinear transaction costs, using state-of-the-art methods in deep reinforcement learning (DRL): deep Q-learning, deep Q-learning with Pop-Art, and proximal policy optimization (PPO). Each DRL model is trained to hedge a whole range of strikes, so no retraining is needed when the user switches to another strike within the range. The models are general, allowing the user to plug in any option pricing and simulation library and then train them, with no further modifications, to hedge arbitrary option portfolios. Through a series of simulations, the authors show that the DRL models learn strategies similar to, or better than, delta hedging. Of the three methods, PPO performs best in terms of profit and loss, training time, and amount of data needed for training.
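The delta-hedging baseline against which the DRL models are compared can be illustrated with a short simulation. The sketch below is not the authors' implementation: the Black-Scholes dynamics, the flat per-share cost model, and all parameter values are assumptions made for illustration. It shows how discrete trading and round lotting constrain the classical hedge.

```python
import math
import random

def bs_call_delta(S, K, tau, r, sigma):
    """Black-Scholes delta of a European call, N(d1)."""
    if tau <= 0:
        return 1.0 if S > K else 0.0
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    return 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))

def delta_hedge_cost(S0=100.0, K=100.0, T=0.25, r=0.0, sigma=0.2,
                     steps=63, lot=10, cost_per_share=0.01, seed=0):
    """Delta-hedge a short call on 100 underlying shares, rounding the
    hedge to whole lots and paying a per-share trading cost.
    Returns the total transaction cost over the episode."""
    rng = random.Random(seed)
    dt = T / steps
    S, shares, total_cost = S0, 0, 0.0
    for t in range(steps):
        tau = T - t * dt
        # snap the Black-Scholes hedge to the nearest whole lot
        target = round(bs_call_delta(S, K, tau, r, sigma) * 100 / lot) * lot
        total_cost += cost_per_share * abs(target - shares)
        shares = target
        # advance the price one step under geometric Brownian motion
        z = rng.gauss(0.0, 1.0)
        S *= math.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
    return total_cost
```

Because the hedge is snapped to whole lots, small delta changes trigger no trade at all; once costs are nonzero, a learned policy may be able to improve on this kind of mechanical rebalancing.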

TOPICS: Big data/machine learning, options, risk management, simulations

Key Findings

  • The authors propose models for the replication of options over a whole range of strikes subject to discrete trading, round lotting, and nonlinear transaction costs, based on state-of-the-art methods in deep reinforcement learning, including deep Q-learning and proximal policy optimization.

  • The models allow the user to plug in any option pricing and simulation library and then train with no further modifications to hedge arbitrary option portfolios.

  • A series of simulations demonstrates that the deep reinforcement learning models learn strategies similar to, or better than, delta hedging.

  • Proximal policy optimization outperforms the other models in terms of profit and loss, training time, and amount of data needed for training.

© 2020 Pageant Media Ltd
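A single model trained over a whole range of strikes, as described in the key findings, can be arranged by including the strike in the agent's state and randomizing it at each episode reset. The gym-style environment below is a hypothetical sketch of that setup: the class name, state layout, reward, and parameters are all illustrative assumptions, not the authors' code. A DRL method such as PPO would then be trained on episodes drawn from it.

```python
import math
import random

class HedgingEnv:
    """Episodic environment for hedging a short European call.

    The strike is drawn at random each episode and exposed in the
    state, so one policy can learn to hedge a whole range of strikes."""

    def __init__(self, S0=100.0, strike_range=(90.0, 110.0), T=0.25,
                 steps=63, sigma=0.2, lot=10, cost_per_share=0.01, seed=0):
        self.S0, self.strike_range, self.T = S0, strike_range, T
        self.steps, self.sigma = steps, sigma
        self.lot, self.cost_per_share = lot, cost_per_share
        self.rng = random.Random(seed)

    def reset(self):
        self.K = self.rng.uniform(*self.strike_range)
        self.S, self.t, self.shares = self.S0, 0, 0
        return self._state()

    def _state(self):
        # (price, strike, time to expiry as a fraction, current hedge)
        return (self.S, self.K, (self.steps - self.t) / self.steps, self.shares)

    def step(self, action):
        """action: target hedge in whole lots; returns (state, reward, done)."""
        target = int(action) * self.lot
        cost = self.cost_per_share * abs(target - self.shares)
        self.shares = target
        # advance the price under geometric Brownian motion (zero drift)
        dt = self.T / self.steps
        S_old = self.S
        z = self.rng.gauss(0.0, 1.0)
        self.S *= math.exp(-0.5 * self.sigma ** 2 * dt
                           + self.sigma * math.sqrt(dt) * z)
        self.t += 1
        done = self.t >= self.steps
        # reward: hedge P&L net of costs; short-call payoff at expiry
        reward = self.shares * (self.S - S_old) - cost
        if done:
            reward -= max(self.S - self.K, 0.0) * 100  # short 1 contract
        return self._state(), reward, done
```

The abstract does not specify the authors' reward design; in practice a risk-adjusted reward that also penalizes P&L variance lets the agent trade off hedging error against transaction costs.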

Jump to section

  • Abstract
  • DEEP REINFORCEMENT LEARNING
  • AUTOMATIC HEDGING
  • COMPUTATIONAL EXAMPLES
  • CONCLUSIONS
  • ADDITIONAL READING
  • ENDNOTES
  • REFERENCES

Cited By

  • Deep Hedging of Derivatives Using Reinforcement Learning
© 2021 Pageant Media Ltd | All Rights Reserved | ISSN: 2640-3943 | E-ISSN: 2640-3951
