TY - JOUR
T1 - Deep Reinforcement Learning for Trading
JF - The Journal of Financial Data Science
DO - 10.3905/jfds.2020.1.030
SP - jfds.2020.1.030
AU - Zihao Zhang
AU - Stefan Zohren
AU - Stephen Roberts
Y1 - 2020/03/16
UR - https://pm-research.com/content/early/2020/03/16/jfds.2020.1.030.abstract
N2 - In this article, the authors adopt deep reinforcement learning algorithms to design trading strategies for continuous futures contracts. Both discrete and continuous action spaces are considered, and volatility scaling is incorporated to create reward functions that scale trade positions based on market volatility. They test their algorithms on 50 very liquid futures contracts from 2011 to 2019 and investigate how performance varies across different asset classes, including commodities, equity indexes, fixed income, and foreign exchange markets. They compare their algorithms against classical time-series momentum strategies and show that their method outperforms such baseline models, delivering positive profits despite heavy transaction costs. The experiments show that the proposed algorithms can follow large market trends without changing positions and can also scale down, or hold, through consolidation periods. Key findings: (1) The authors introduce reinforcement learning algorithms to design trading strategies for futures contracts, investigating both discrete and continuous action spaces and improving reward functions with volatility scaling that adjusts trade positions based on market volatility. (2) The authors discuss the connection between modern portfolio theory and the reinforcement learning reward hypothesis and show that they are equivalent if a linear utility function is used. (3) The authors back-test their methods on 50 very liquid futures contracts from 2011 to 2019, and their algorithms deliver positive profits despite heavy transaction costs.
KW - Futures and forward contracts
KW - Exchanges/markets/clearinghouses
KW - Statistical methods
KW - Simulations
ER -