PT - JOURNAL ARTICLE
AU - Soham Banerjee
AU - Diganta Mukherjee
TI - Deep Learning Classifier with Piecewise Linear Activation Function: An Empirical Evaluation with Intraday Financial Data
AID - 10.3905/jfds.2019.1.018
DP - 2019 Nov 26
TA - The Journal of Financial Data Science
PG - jfds.2019.1.018
4099 - https://pm-research.com/content/early/2019/11/26/jfds.2019.1.018.short
4100 - https://pm-research.com/content/early/2019/11/26/jfds.2019.1.018.full
AB - Price movement predictions of financial instruments using traditional time-series models with a predefined mathematical structure are common, but this structure restricts the models' ability to learn latent patterns in the data. In recent times, artificial neural networks (ANNs) have been able to learn complex hidden patterns from financial datasets using a highly nonlinear architecture. However, most experiments with neural networks require a lot of time to search for a suitable network architecture and subsequently to train it. The authors have developed a deep multilayer perceptron (MLP) classifier with a zero-centered piecewise linear unit activation that yielded better classification performance according to their accuracy metric and required consistently less training time compared to a similar MLP network with leaky rectified linear unit and tanh activation functions. The authors illustrate their technique with a large high-frequency dataset on selected bank shares from the Indian stock market. They also discuss the theoretical properties and advantages of their proposal.
TOPICS: Statistical methods, simulations, big data/machine learning
Key Findings
• Time-series models have been used extensively to understand stock price movements, but they are constrained by a predefined structure. Artificial neural networks (ANNs), due to their universal approximation properties, offer more flexibility; however, they suffer from long training times.
• In this article, the authors introduce a zero-centered piecewise linear unit (PLU), a hybrid activation function derived from the tanh and leaky rectified linear unit (ReLU) functions that exhibits the zero-centered property of tanh and the nonzero gradient property of leaky ReLU (see the sketch below).
• The authors developed a multilayer perceptron (MLP) classifier with PLU activation, which yielded better classification performance according to the accuracy metric and required consistently less training time compared to a similar MLP network with leaky ReLU and tanh activations.
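
The abstract does not give the PLU's closed form. As an illustration only, the sketch below implements one common zero-centered piecewise linear unit from the literature that matches the description: identity near zero (like the linear regime of tanh) and a constant nonzero slope alpha in the tails (like leaky ReLU). The formula and the default parameters alpha=0.1 and c=1.0 are assumptions for illustration, not the authors' definition.

import numpy as np

def plu(x, alpha=0.1, c=1.0):
    """Illustrative piecewise linear unit: identity on [-c, c], slope alpha outside.

    Zero-centered like tanh (plu(0) == 0, odd function) and, like leaky
    ReLU, its gradient never vanishes (alpha outside [-c, c], 1 inside).
    alpha and c are illustrative defaults, not values from the paper.
    """
    x = np.asarray(x, dtype=float)
    return np.maximum(alpha * (x + c) - c,
                      np.minimum(alpha * (x - c) + c, x))

# Quick check of the claimed properties on a few points.
xs = np.array([-3.0, -1.0, 0.0, 0.5, 1.0, 3.0])
print(plu(xs))  # [-1.2 -1.   0.   0.5  1.   1.2]

Unlike tanh, this form does not saturate in the tails, which is consistent with the abstract's claim that the nonzero gradient property helps avoid slow training.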