Abstract

Investors trade stocks according to forecasts of stock price trends. In recent years, many researchers have focused on adopting machine learning (ML) algorithms to predict stock price trends. However, their studies were carried out on small stock datasets with limited features and short backtesting periods, did not consider transaction cost, and reported experimental results without statistical significance tests. In this paper, we synthetically evaluate various ML algorithms on large-scale stock datasets and observe the daily trading performance of stocks with and without transaction cost. Specifically, we use two large datasets of 424 S&P 500 index component stocks (SPICS) and 185 CSI 300 index component stocks (CSICS) from 2010 to 2017 and compare six traditional ML algorithms and six advanced deep neural network (DNN) models on these two datasets, respectively. The experimental results demonstrate that the traditional ML algorithms have better performance in most of the directional evaluation indicators. Unexpectedly, without considering transaction cost, the performance of some traditional ML algorithms is not much worse than that of the best DNN models. Moreover, the trading performance of all ML algorithms is sensitive to changes in transaction cost, and, when transaction cost is considered, the DNN models perform better than the traditional ML algorithms. Meanwhile, transparent transaction cost and implicit transaction cost have different impacts on trading performance. Our conclusions are significant for choosing the best algorithm for stock trading in different markets.

1. Introduction

The stock market plays a very important role in modern economic and social life. Investors want to maintain or increase the value of their assets by investing in the stocks of listed companies with higher expected earnings. For a listed company, issuing stock is an important tool to raise funds from the public and expand the scale of its business. In general, investors make stock investment decisions by predicting the future direction of stock prices. In the modern financial market, successful investors are good at making use of high-quality information to make investment decisions, and, more importantly, they can make quick and effective decisions based on the information they already have. Therefore, the field of stock investment attracts the attention not only of financial practitioners and ordinary investors but also of academic researchers [1].

For many years, researchers mainly constructed statistical models describing the time series of stock prices and trading volumes to forecast the trends of future stock returns [2–4]. It is worth noting that, with the development of artificial intelligence technology, intelligent computing methods represented by ML algorithms have also shown vigorous development in stock market prediction. The main reasons are as follows. (1) Multisource heterogeneous financial data are easy to obtain, including high-frequency trading data, rich and diverse technical indicator data, macroeconomic data, industry policy and regulation data, market news, and even social network data. (2) The research on intelligent algorithms has deepened. From the early linear models, support vector machines, and shallow neural networks to DNN models and reinforcement learning algorithms, intelligent computing methods have made significant progress and have been effectively applied to fields such as image recognition and text analysis. Some authors argue that these advanced algorithms can capture the dynamic changes of the financial market, simulate the trading process of stocks, and make automatic investment decisions. (3) The rapid development of high-performance computing hardware, such as Graphics Processing Units (GPUs) and large servers, provides powerful storage and computing capacity for the use of financial big data. Together, high-performance computing equipment, accurate and fast intelligent algorithms, and financial big data can provide decision support for programmed and automated stock trading, which has gradually been accepted by industry practitioners. The power of financial technology is thus reshaping the financial market and changing the landscape of finance.

Over the years, traditional ML methods have shown strong ability in trend prediction of stock prices [2–16]. In recent years, artificial intelligence methods represented by DNN have made a series of major breakthroughs in fields such as Natural Language Processing, image classification, and voice translation. It is noteworthy that some DNN algorithms have also been applied to time series prediction and quantitative trading [17–34]. However, most of the previous studies focused on predicting the stock indexes of major economies ([2, 8, 11, 13, 15–17, 22, 29, 30, 32], etc.), selected a few stocks with limited features according to the authors' own preferences ([8–11, 14, 17, 20, 22, 26, 31], etc.), did not consider transaction cost ([10, 14, 17, 23], etc.), or used very short backtesting periods ([2, 8, 9, 11, 17, 20, 22, 27], etc.). Meanwhile, no statistical significance tests were performed between the different algorithms used for stock trading ([8–11, 32], etc.). That is, the comparison and evaluation of the various trading algorithms lack large-scale stock datasets, consideration of transaction cost, and statistical significance tests. Therefore, the backtesting performance may tend to be overly optimistic. In this regard, we need to clarify two concerns based on a large-scale stock dataset: (1) whether trading strategies based on DNN models can achieve statistically significant improvements over the traditional ML algorithms without transaction cost; (2) how transaction cost affects the trading performance of ML algorithms. These problems constitute the main motivation of this research, and they are very important for quantitative investment practitioners and portfolio managers. The solutions to these problems are of great value for practitioners doing stock trading.

In this paper, we select 424 SPICS and 185 CSICS from 2010 to 2017 as research objects. The SPICS and CSICS represent the industry development of the world's top two economies and are attractive to investors around the world. The stock symbols are shown in the “Data Availability”. For each stock in SPICS and CSICS, we construct 44 technical indicators as shown in the “Data Availability”. The label on the t-th trading day is the sign of the return of the (t+1)-th trading day relative to the t-th trading day; that is, if the return is positive, the label value is set to 1, otherwise 0. For each stock, we choose the 44 technical indicators of the 2000 trading days before December 31, 2017, to build a stock dataset. After the dataset of a stock is built, we use the walk-forward analysis (WFA) method to train the ML models step by step. In each training step, we use 6 traditional ML methods, namely, support vector machine (SVM), random forest (RF), logistic regression (LR), naïve Bayes model (NB), classification and regression tree (CART), and eXtreme Gradient Boosting (XGB), and 6 DNN models widely used in text analysis and voice translation, namely, Multilayer Perceptron (MLP), Deep Belief Network (DBN), Stacked Autoencoders (SAE), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), to train and forecast the trends of stock prices based on the technical indicators. Finally, we use the directional evaluation indicators, namely, accuracy rate (AR), precision rate (PR), recall rate (RR), F1-Score (F1), and Area Under Curve (AUC), and the performance evaluation indicators, namely, winning rate (WR), annualized return rate (ARR), annualized Sharpe ratio (ASR), and maximum drawdown (MDD), to evaluate the trading performance of these algorithms and strategies.
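
To make the data preparation concrete, the following is a minimal R sketch of the label construction and max-min normalization described above; the price vector and indicator matrix are hypothetical stand-ins for the real 44-indicator dataset.

# Minimal sketch of label construction and feature scaling (hypothetical data).
# 'close' is a vector of daily closing prices; 'features' is a matrix of
# technical indicators (one row per trading day).
close    <- c(100.0, 101.2, 100.8, 102.5, 101.9, 103.1)
features <- matrix(rnorm(6 * 3), nrow = 6, ncol = 3)

# Label on day t is 1 if the close of day t+1 exceeds the close of day t,
# otherwise 0; the last day has no next-day return and is dropped.
label <- as.integer(diff(close) > 0)

# Max-min normalization of each feature column to [0, 1].
normalize <- function(x) (x - min(x)) / (max(x) - min(x))
scaled <- apply(features, 2, normalize)

head(cbind(scaled[1:length(label), ], label))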

From the experiments, we find that the traditional ML algorithms have better performance than the DNN algorithms in all directional evaluation indicators except for PR in SPICS; in CSICS, the DNN algorithms have better performance in AR, PR, and F1, but not in RR and AUC. (1) Trading performance without transaction cost is as follows. The WR of the traditional ML algorithms are better than those of the DNN algorithms in both SPICS and CSICS. The ARR and ASR of all ML algorithms are significantly greater than those of the benchmark indexes (the S&P 500 index and the CSI 300 index) and the BAH strategy; the MDD of all ML algorithms are significantly greater than that of the benchmark index and significantly less than that of the BAH strategy. Among all ML algorithms, there are always some traditional ML algorithms whose trading performance (ARR, ASR, MDD) is comparable to that of the best DNN algorithms. Therefore, DNN algorithms are not always the best choice: the performance of some traditional ML algorithms is not significantly different from that of the DNN algorithms, and those traditional ML algorithms can even perform well in ARR and ASR. (2) Trading performance with transaction cost is as follows. The trading performance (WR, ARR, ASR, and MDD) of all ML algorithms deteriorates as transaction cost increases, as in the actual trading situation. Under the same transaction cost structure, the performance reductions of the DNN algorithms, especially MLP, DBN, and SAE, are smaller than those of the traditional ML algorithms, which shows that the DNN algorithms have stronger tolerance and risk control ability against changes in transaction cost. Moreover, the impact of transparent transaction cost on SPICS is greater than that of slippage, while the opposite is true for CSICS. Multiple comparative analysis of the different transaction cost structures shows that the performance of the trading algorithms is significantly worse than that without transaction cost; that is, trading performance is sensitive to transaction cost. The contribution of this paper is that we use nonparametric statistical tests to compare the differences in trading performance of different ML algorithms both with and without transaction cost. This work therefore helps to select the most suitable algorithm for stock trading in both the US stock market and the Chinese A-share market.

The remainder of this paper is organized as follows. Section 2 describes the architecture of this work. Section 3 gives the parameter settings of the ML models and the algorithm for generating trading signals based on the ML models mentioned in this paper. Section 4 gives the directional evaluation indicators, the performance evaluation indicators, and the backtesting algorithms. Section 5 uses nonparametric statistical tests to analyze and evaluate the performance of the different algorithms in the two markets. Section 6 analyzes the impact of transaction cost on the performance of the ML algorithms for trading. Section 7 discusses the differences in trading performance among the different algorithms from the perspectives of data, algorithms, and transaction cost and gives suggestions for algorithmic trading. Section 8 provides a comprehensive conclusion and future research directions.

2. Architecture of the Work

The general framework of predicting future stock price trends, the trading process, and backtesting based on ML algorithms is shown in Figure 1. This work is organized into data acquisition, data preparation, intelligent learning algorithms, and trading performance evaluation. Data acquisition is the first step: we need to consider where to get the data and what software to use to get it quickly and accurately. In this paper, we use the R language for all computational procedures, and we obtain the SPICS and CSICS data from Yahoo Finance and Netease Finance, respectively (a download sketch is given below). Secondly, data preparation includes adjusting the acquired prices for dividends and rights issues, generating a large number of well-recognized technical indicators as features, and applying max-min normalization to the features, so that the preprocessed data can be used as the input of the ML algorithms [34]. Thirdly, the trading signals of the stocks are generated by the ML algorithms: we train the DNN models and the traditional ML algorithms with a WFA method, and the trained models then predict the direction of each stock in a future period, which is taken as the trading signal. Fourthly, we present widely used directional evaluation indicators and performance evaluation indicators and adopt a backtesting algorithm to calculate them. Finally, we use the trading signals to run the backtesting algorithm of the daily stock trading strategy and then apply statistical tests to evaluate whether there are statistically significant differences among the performances of these trading algorithms, both with and without transaction cost.
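
As an illustration of the data acquisition step, the sketch below pulls adjusted daily prices for one SPICS symbol in R; the authors do not name their download tool, so the quantmod package used here is an assumption.

# One possible way to pull adjusted S&P 500 component data in R
# (quantmod is an assumption; the paper only names the data sources).
library(quantmod)

# auto.assign = FALSE returns the xts object instead of writing it to the
# global environment; adjusted prices account for dividends and splits,
# which corresponds to the ex-dividend/rights step described above.
aapl <- getSymbols("AAPL", src = "yahoo",
                   from = "2010-01-01", to = "2017-12-31",
                   auto.assign = FALSE)
head(Ad(aapl))  # adjusted closing prices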

3. ML Algorithms

3.1. ML Algorithms and Their Parameter Settings

Given a training dataset D, the task of an ML algorithm is to classify the class labels correctly. In this paper, we use six traditional ML models (LR, SVM, CART, RF, NB, and XGB) and six DNN models (MLP, DBN, SAE, RNN, LSTM, and GRU) as classifiers to predict the ups and downs of stock prices [34]. The main model parameters and training parameters of these ML algorithms are shown in Tables 1 and 2.

In Tables 1 and 2, features and class labels are set according to the input format of each ML algorithm in the R language. Matrix(m, n) represents a matrix with m rows and n columns; Array(p, m, n) represents a tensor in which each layer is a Matrix(m, n) and the height is p. c(h1, h2, h3, …) represents a vector whose length is the number of hidden layers and whose i-th element is the number of neurons in the i-th hidden layer. In the experiments, m = 250 means that we use the data of the past 250 trading days as training samples in each round of WFA, and n = 44 means that the data of each day has 44 features. In Table 2, the parameters of the DNN models, such as activation function, learning rate, batch size, and epochs, are all default values of the corresponding R packages.

3.2. WFA Method

WFA [35] is a rolling training method. We use the most recent data rather than all past data to train the model and then apply the trained model to predict the out-of-sample data (testing dataset) of the following time period. After that, the training set walks one step forward, and the next round of training is carried out on the new training set. WFA can improve the robustness and the confidence of a trading strategy in real-time trading.

In this paper, we use the ML algorithms and the WFA method to make stock price trend predictions as trading signals. In each step, we use the data of the past 250 days (one year) as the training set and the data of the next 5 days (one week) as the test set. Each stock contains data of 2,000 trading days, so it takes (2000-250)/5 = 350 training sessions to produce a total of 1,750 predictions, which are the trading signals of the daily trading strategy. The WFA method is shown in Figure 2, and the window arithmetic is sketched below.
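
The window arithmetic of this WFA scheme can be checked with a short R sketch (values as stated above: k = 250 training days, n = 5 test days, 2000 trading days in total):

# Walk-forward split indices for one stock.
k <- 250; n <- 5; L <- 2000
M <- (L - k) / n                      # 350 training sessions
splits <- lapply(seq_len(M), function(j) {
  offset <- n * (j - 1)
  list(train = (offset + 1):(offset + k),         # 250-day training window
       test  = (offset + k + 1):(offset + k + n)) # next 5 days to predict
})
splits[[1]]$test                                  # days 251..255
splits[[M]]$test                                  # days 1996..2000
sum(sapply(splits, function(s) length(s$test)))   # 1750 predictions in total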

3.3. The Algorithm Design of Trading Signal

In this part, we use the ML algorithms as classifiers to predict the ups and downs of the stocks in SPICS and CSICS and then use the prediction results as the trading signals of daily trading. We train each ML algorithm with the WFA method. The procedure for generating trading signals according to Figure 2 is shown in Algorithm 1; a runnable R sketch follows the listing.

Input: Stock Symbols
Output: Trading Signals
(1) N=Length of Stock Symbols
(2) L=Length of Trading Days
(3) P=Length of Features
(4) k= Length of Training Dataset for WFA
(5) n= Length of Sliding Window for WFA
(6) for (i in 1: N)
(7) Stock=Stock Symbols[i]
(8) M=(L-k)/n
(9) Trading Signal=NULL
(10) for (j in 1:M)
(11) Dataset=Stock[(1+n(j-1)):(k+n+n(j-1)), 1:(P+1)]
(12) Train=Dataset[1:k,1:(1+P)]
(13) Test= Dataset[(k+1):(k+n),1:P]
(14) Model=ML Algorithm(Train)
(15) Probability=Model(Test)
(16) if (Probability>=0.5)
(17) Trading Signal0=1
(18) else
(19) Trading Signal0=0
(20) end if
(21) Trading Signal=c (Trading Signal, Trading Signal0)
(22) end for
(23) Trading Signals[i]=Trading Signal; end for
(24) return (Trading Signals)
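
For concreteness, the following is a runnable R rendering of Algorithm 1 for a single stock, with logistic regression standing in for whichever of the twelve classifiers is plugged in; the simulated data frame and its column names are hypothetical.

# Runnable sketch of Algorithm 1 for one stock; glm (logistic regression)
# stands in for the chosen classifier at the 'model' step. 'stock' is a
# hypothetical data frame with P feature columns plus a 0/1 column 'Label'.
set.seed(1)
P <- 5; L <- 300; k <- 250; n <- 5
stock <- data.frame(matrix(rnorm(L * P), ncol = P))
stock$Label <- rbinom(L, 1, 0.5)

M <- (L - k) / n
signals <- NULL
for (j in seq_len(M)) {
  offset <- n * (j - 1)
  train <- stock[(offset + 1):(offset + k), ]          # k-day training window
  test  <- stock[(offset + k + 1):(offset + k + n), 1:P]  # next n days
  model <- glm(Label ~ ., data = train, family = binomial)
  prob  <- predict(model, newdata = test, type = "response")
  signals <- c(signals, as.integer(prob >= 0.5))       # 1 = predicted UP
}
length(signals)  # (L - k) = 50 trading signals for this toy example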

4. Evaluation Indicators and Backtesting Algorithm

4.1. Directional Evaluation Indicators

In this paper, we use ML algorithms to predict the direction of stock price, so the main task of the ML algorithms is to classify returns. Therefore, it is necessary for us to use directional evaluation indicators to evaluate the classification ability of these algorithms.

The actual label values of the dataset form a sequence taking values in the set {UP, DOWN}. Therefore, there are four combinations of predicted and actual label values, expressed as TU, FU, FD, and TD. TU denotes the number of cases where the actual label is UP and the predicted label is also UP; FU denotes the number of cases where the actual label is DOWN but the predicted label is UP; TD denotes the number of cases where the actual label is DOWN and the predicted label is also DOWN; FD denotes the number of cases where the actual label is UP but the predicted label is DOWN, as shown in Table 3. Table 3 is a two-dimensional table called a confusion matrix. It classifies predicted label values according to whether they match the actual label values: the first dimension of the table represents all possible predicted label values, and the second dimension represents all actual label values. When a predicted label equals the actual label, it is a correct classification, so the correctly predicted cases lie on the diagonal of the confusion matrix. In this paper, our concern is as follows: when the direction of a stock price is predicted to be UP tomorrow, we buy the stock at today's closing price and sell it at tomorrow's closing price; when the direction is predicted to be DOWN tomorrow, we do nothing. So UP is the “positive” label of our concern.

In most classification tasks, AR is generally used to evaluate the performance of classifiers. AR is the ratio of the number of correct predictions to the total number of predictions. That is,

AR = (TU + TD) / (TU + FU + FD + TD).

In this paper, “UP” is the profit source of our trading strategies. The classification ability of ML algorithm is to evaluate whether the algorithms can recognize “UP”. Therefore, it is necessary to use PR and RR to evaluate classification results. These two evaluation indicators are initially applied in the field of information retrieval to evaluate the relevance of retrieval results.

PR is the ratio of the number of correctly predicted UP to all predicted UP. That is,

PR = TU / (TU + FU).

High PR means that ML algorithms can focus on “UP” rather than “DOWN”.

RR is the ratio of the number of correctly predicted “UP” to the number of actually labeled “UP”. That is,

RR = TU / (TU + FD).

A high RR means that a large proportion of the actual “UP” cases are captured and effectively identified. In fact, it is very difficult for an algorithm to achieve high PR and high RR at the same time. Therefore, it is necessary to measure the classification ability of an ML algorithm with an evaluation indicator that combines PR with RR. F1-Score is the harmonic mean of PR and RR and is a more comprehensive evaluation indicator. That is,

F1 = 2 × PR × RR / (PR + RR).

Here, it is assumed that the weights of PR and RR are equal when calculating F1, but this assumption is not always correct. It is feasible to calculate F1 with different weights for PR and RR, but determining weights is a very difficult challenge.

AUC is the area under the ROC (Receiver Operating Characteristic) curve. The ROC curve is often used to examine the tradeoff between finding true UPs and avoiding false UPs: its horizontal axis is the FU rate, its vertical axis is the TU rate, and each point on the curve represents the proportion of TU under a different decision threshold [36]. AUC reflects the classification ability of a classifier: the larger the value, the better the classification ability. It is worth noting that two different ROC curves may lead to the same AUC value, so qualitative analysis should be carried out in combination with the ROC curve when using the AUC value. In this paper, we use the R package “ROCR” to calculate AUC, as sketched below.
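
Putting the five directional indicators together, the sketch below computes AR, PR, RR, F1, and AUC in R from hypothetical signals and labels, using the “ROCR” package named above for AUC.

# Directional indicators for one stock from predicted signals (1 = UP)
# and actual labels; the two vectors here are simulated placeholders.
library(ROCR)
set.seed(1)
sig   <- rbinom(1750, 1, 0.5)   # hypothetical trading signals
label <- rbinom(1750, 1, 0.5)   # hypothetical actual labels

# Confusion matrix: rows = predicted, columns = actual (as in Table 3).
tab <- table(Predicted = sig, Actual = label)
TU <- tab["1", "1"]; FU <- tab["1", "0"]
TD <- tab["0", "0"]; FD <- tab["0", "1"]

AR <- (TU + TD) / sum(tab)      # accuracy rate
PR <- TU / (TU + FU)            # precision rate
RR <- TU / (TU + FD)            # recall rate
F1 <- 2 * PR * RR / (PR + RR)   # F1-Score

pred <- prediction(sig, label)
AUC  <- performance(pred, measure = "auc")@y.values[[1]]
c(AR = AR, PR = PR, RR = RR, F1 = F1, AUC = AUC)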

4.2. Performance Evaluation Indicator

Performance evaluation indicators are used for evaluating the profitability and risk control ability of trading algorithms. In this paper, we use the trading signals generated by the ML algorithms to conduct backtesting and apply WR, ARR, ASR, and MDD to evaluate trading performance [34]. WR is a measure of the accuracy of trading signals; ARR is the theoretical rate of return of a trading strategy; ASR is a risk-adjusted return representing the return earned per unit of risk [37], where the risk-free return or benchmark is set to 0 in this paper; MDD is the largest decline in the price or value over the investment period and is an important risk assessment indicator. A calculation sketch is given below.
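
As an illustration, the four indicators can be computed from a daily return series in R with the PerformanceAnalytics package (an assumption on our part; note that its drawdown function is spelled maxDrawdown). The return series below is simulated.

# WR, ARR, ASR, and MDD from a hypothetical daily trading return series.
library(PerformanceAnalytics)
library(xts)
set.seed(1)
dates <- seq(as.Date("2010-01-04"), by = "day", length.out = 1750)
tdrr  <- xts(rnorm(1750, mean = 0.0005, sd = 0.01), order.by = dates)

WR  <- sum(tdrr > 0) / sum(tdrr != 0)              # winning rate
ARR <- Return.annualized(tdrr, scale = 252)        # annualized return rate
ASR <- SharpeRatio.annualized(tdrr, Rf = 0, scale = 252)  # annualized Sharpe
MDD <- maxDrawdown(tdrr)                           # maximum drawdown
c(WR = WR, ARR = as.numeric(ARR), ASR = as.numeric(ASR), MDD = MDD)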

4.3. Backtesting Algorithm

Using historical data to implement a trading strategy is called backtesting. In the research and development phase of a trading model, researchers usually use a fresh set of historical data for backtesting. Furthermore, the backtesting period should be long enough, because a large amount of historical data helps the trading model minimize the sampling bias of the data. Through backtesting, we can obtain the theoretical statistical performance of trading models. In this paper, we get 1750 trading signals for each stock. If tomorrow's trading signal is 1, we buy the stock at today's closing price and then sell it at tomorrow's closing price; otherwise, we do not trade. Finally, we obtain AR, PR, RR, F1, AUC, WR, ARR, ASR, and MDD by running the backtesting algorithm on these trading signals.

5. Comparative Analysis of Different Trading Algorithms

5.1. Nonparametric Statistical Test Method

In this part, we use the backtesting algorithm (Algorithm 2) to calculate the evaluation indicators of the different trading algorithms. In order to test whether there are significant differences between the evaluation indicators of the different ML algorithms, the benchmark indexes, and the BAH strategies, it is necessary to use analysis of variance and multiple comparisons. Therefore, we propose the following nine basic hypotheses for the significance tests, in which Hja (j = 1, 2, 3, 4, 5, 6, 7, 8, 9) are the null hypotheses and Hjb (j = 1, 2, 3, 4, 5, 6, 7, 8, 9) are the corresponding alternative hypotheses. The significance level is 0.05.

Input: TS # TS is the trading signal series of a stock.
Output: AR, PR, RR, F1, AUC, WR, ARR, ASR, MDD
(1) N=Length of Stock Code List # 424 SPICS or 185 CSICS.
(2) B=Benchmark Index [“Closing Price”] # B is the closing price series of the benchmark index.
(3) AR=NULL; PR=NULL; RR=NULL; F1=NULL; AUC=NULL; WR=NULL; ARR=NULL; ASR=NULL; MDD=NULL
(4) for (i in 1: N)
(5) Stock_Data=Stock Code List[i]
(6) Close=Stock_Data [“Closing Price”]
(7) Label=Stock_Data [“Label”]
(8) BDRR=(B_t - B_{t-1})/B_{t-1} # BDRR is the daily return rate of the benchmark index.
(9) DRR=(Close_t - Close_{t-1})/Close_{t-1} # DRR is the daily return rate, i.e., the daily return rate of the BAH strategy.
(10) TDRR=lag (TS)*DRR # TDRR is the daily return through trading.
(11) Table=Confusion_Matrix(TS, Label)
(12) AR[i]=sum(diag(Table))/sum(Table)
(13) PR[i]=Table[“UP”,“UP”]/sum(Table[“UP”, ]) # TU/(TU+FU)
(14) RR[i]=Table[“UP”,“UP”]/sum(Table[ ,“UP”]) # TU/(TU+FD)
(15) F1[i]=2*PR[i]*RR[i]/(PR[i]+RR[i])
(16) Pred=prediction (TS, Label)
(17) AUC[i]=performance (Pred, measure=“auc”)@y.values
(18) WR[i]=sum (TDRR>0)/sum(TDRR≠0)
(19) ARR[i]=Return.annualized (TDRR) # TDRR, BDRR, or DRR can be used.
(20) ASR[i]=SharpeRatio.annualized (TDRR) # TDRR, BDRR, or DRR can be used.
(21) MDD[i]=maxDrawDown (TDRR) # TDRR, BDRR, or DRR can be used.
(22) end for
(23) Performance=cbind (AR, PR, RR, F1, AUC, WR, ARR, ASR, MDD)
(24) return (Performance)

For any evaluation indicator j and the set of trading strategies under comparison, the null hypothesis is Hja and the alternative hypothesis is Hjb (j = 1, 2, 3, 4, 5, 6, 7, 8, 9 corresponds to AR, PR, RR, F1, AUC, WR, ARR, ASR, and MDD, respectively). Hja: the evaluation indicator j of all strategies is the same. Hjb: the evaluation indicator j of all strategies is not all the same.

It is worth noting that no evaluation indicator of the trading algorithms or strategies conforms to the basic assumptions of analysis of variance; that is, the assumptions that any two groups of samples have the same variance and that each group of samples is normally distributed are violated. Therefore, it is not appropriate to use t-tests for the analysis of variance, and we adopt nonparametric statistical tests instead. In this paper, we use the Kruskal-Wallis rank sum test [38] to carry out the analysis of variance. If the alternative hypothesis is established, we further apply the Nemenyi test [39] for multiple comparisons between the trading strategies, as sketched below.
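
The testing pipeline can be sketched in R as follows; kruskal.test is in base R, and the PMCMRplus package is one option for the Nemenyi post hoc test (a choice we assume here, since the paper does not name a package). The ARR samples are simulated.

# Kruskal-Wallis rank sum test, followed by Nemenyi multiple comparisons
# if the null hypothesis is rejected; the data are hypothetical ARR values
# for three strategies over 424 stocks each.
library(PMCMRplus)
set.seed(1)
df <- data.frame(
  arr      = c(rnorm(424, 0.15, 0.05), rnorm(424, 0.18, 0.05), rnorm(424, 0.10, 0.05)),
  strategy = factor(rep(c("MLP", "XGB", "BAH"), each = 424))
)

kw <- kruskal.test(arr ~ strategy, data = df)
print(kw$p.value)
if (kw$p.value < 0.05) {
  # pairwise p-values between strategies (a Table 5-style matrix)
  print(kwAllPairsNemenyiTest(arr ~ strategy, data = df))
}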

5.2. Comparative Analysis of Performance of Different Trading Strategies in SPICS

Table 4 shows the average values of the various trading algorithms in AR, PR, RR, F1, AUC, WR, ARR, ASR, and MDD. We can see that the AR, RR, F1, and AUC of XGB are the greatest among all trading algorithms. The WR of NB is the greatest among all trading strategies. The ARR of MLP is the greatest among all trading strategies, including the benchmark index (S&P 500 index) and the BAH strategy. The ASR of RF is the greatest among all trading strategies. The MDD of the benchmark index is the smallest among all trading strategies. It is worth noting that the ARR and ASR of all ML algorithms are greater than those of the BAH strategy and the benchmark index.

(1) Through the hypothesis test analysis of H1a and H1b, we can obtain p value<2.2e-16.

Therefore, there are statistically significant differences between the AR of all trading algorithms, and we need to conduct further multiple comparative analysis, as shown in Table 5. The number in the table is the p value of the Nemenyi test between any two algorithms. When the p value is <0.05, we conclude that the two trading algorithms have a significant difference; otherwise, we cannot reject the null hypothesis that the mean AR values of the two algorithms are equal. From Tables 4 and 5, we can see that the AR of all DNN models are significantly lower than those of all traditional ML models. The AR of MLP, DBN, and SAE are significantly greater than those of RNN, LSTM, and GRU. There are no significant differences among the AR of MLP, DBN, and SAE, nor among the AR of RNN, LSTM, and GRU.

(2) Through the hypothesis test analysis of H2a and H2b, we can obtain p value<2.2e-16. So, there are statistically significant differences between the PR of all trading algorithms, and we need to conduct further multiple comparative analysis, as shown in Table 6. The number in the table is the p value of the Nemenyi test between any two algorithms. From Tables 4 and 6, we can see that the PR of MLP, DBN, and SAE are significantly greater than those of the other trading algorithms. The PR of LSTM is not significantly different from that of GRU and NB. The PR of GRU is significantly lower than that of all traditional ML algorithms. The PR of NB is significantly lower than that of the other traditional ML algorithms.

(3) Through the hypothesis test analysis of H3a and H3b, we can obtain p value<2.2e-16. So, there are statistically significant differences between the RR of all trading algorithms, and we need to conduct further multiple comparative analysis, as shown in Table 7. The number in the table is the p value of the Nemenyi test between any two algorithms. From Tables 4 and 7, we can see that there is no significant difference among the RR of the DNN models, but the RR of any DNN model is significantly lower than that of all traditional ML models. The RR of NB is significantly lower than that of the other traditional ML algorithms. The RR of CART is significantly lower than that of the other traditional ML algorithms except for NB.

(4) Through the hypothesis test analysis of H4a and H4b, we can obtain p value<2.2e-16. So, there are statistically significant differences between the F1 of all trading algorithms, and we need to conduct further multiple comparative analysis, as shown in Table 8. The number in the table is the p value of the Nemenyi test between any two algorithms. From Tables 4 and 8, we can see that there is no significant difference among the F1 of MLP, DBN, and SAE. The F1 of MLP, DBN, and SAE are significantly greater than those of RNN, LSTM, GRU, and NB but significantly smaller than those of RF, LR, SVM, and XGB. The F1 of GRU and LSTM have no significant difference, but they are significantly smaller than those of all traditional ML algorithms. The F1 of XGB is significantly greater than that of all other trading algorithms.

(5) Through the hypothesis test analysis of H5a and H5b, we can obtain p value<2.2e-16. So, there are statistically significant differences between the AUC of all trading algorithms, and we need to conduct further multiple comparative analysis, as shown in Table 9. The number in the table is the p value of the Nemenyi test between any two algorithms. From Tables 4 and 9, we can see that there is no significant difference among the AUC of the DNN models, and the AUC of all DNN models are significantly smaller than that of any traditional ML model.

(6) Through the hypothesis test analysis of H6a and H6b, we can obtain p value<2.2e-16. So, there are statistically significant differences between the WR of all trading algorithms, and we need to conduct further multiple comparative analysis, as shown in Table 10. The number in the table is the p value of the Nemenyi test between any two algorithms. From Tables 4 and 10, we can see that the WR of MLP, DBN, and SAE have no significant difference, but they are significantly higher than those of BAH and the benchmark index and significantly lower than those of the other trading algorithms. The WR of RNN, LSTM, and GRU have no significant difference, but they are significantly higher than that of CART and significantly lower than those of NB and RF. The WR of LR is not significantly different from those of RF, SVM, and XGB.

(7) Through the analysis of the hypothesis test of H7a and H7b, we obtain p value<2.2e-16. Therefore, there are significant differences between the ARR of all trading strategies including the benchmark index and BAH. We need to do further multiple comparative analysis, as shown in Table 11. From Tables 4 and 11, we can see that the ARR of the benchmark index and BAH are significantly lower than that of all ML algorithms. The ARR of MLP, DBN, and SAE are significantly greater than that of RNN, LSTM, GRU, NB, and LR, but not significantly different from that of CART, RF, SVM, and XGB; there is no significant difference between the ARR of MLP, DBN, and SAE. The ARR of RNN, LSTM, and GRU are significantly less than that of CART, but they are not significantly different from that of other traditional ML algorithms. In all traditional ML algorithms, the ARR of CART is significantly greater than that of NB and LR, but, otherwise, there is no significant difference between ARR of any other two algorithms.

(8) Through the hypothesis test analysis of H8a and H8b, we obtain p value<2.2e-16. Therefore, there are significant differences between ASR of all trading strategies including the benchmark index and BAH. The results of our multiple comparative analysis are shown in Table 12. From Tables 4 and 12, we can see that the ASR of the benchmark index and BAH are significantly smaller than that of all ML algorithms. The ASR of MLP and DBN are significantly greater than that of CART and are significantly smaller than that of NB, RF, and XGB, but there is no significant difference between MLP, DBN, and other algorithms. The ASR of SAE is significantly greater than that of CART and significantly less than that of RF and XGB, but there is no significant difference between SAE and other algorithms. The ASR of RNN and LSTM are significantly greater than that of CART and significantly less than that of RF, but there is no significant difference between RNN, LSTM, and other algorithms. The ASR of GRU is significantly greater than that of CART, but there is no significant difference between GRU and other traditional ML algorithms. In all traditional ML algorithms, the ASR of all algorithms are significantly greater than that of CART, but otherwise, there is no significant difference between ASR of any other two algorithms.

(9) Through the hypothesis test analysis of H9a and H9b, we obtain p value<2.2e-16. Therefore, there are significant differences between MDD of trading strategies including the benchmark index and the BAH. The results of multiple comparative analysis are shown in Table 13. From Tables 4 and 13, we can see that MDD of any ML algorithm is significantly greater than that of the benchmark index but significantly smaller than that of BAH strategy. The MDD of MLP and DBN are significantly smaller than those of GRU, RF, and XGB, but there is no significant difference between MLP, DBN, and other algorithms. The MDD of SAE is significantly smaller than that of XGB, but there is no significant difference between SAE and other algorithms. Otherwise, there is no significant difference between MDD of any other two algorithms.

In a word, the traditional ML algorithms such as NB, RF, and XGB have good performance in most directional evaluation indicators such as AR, PR, and F1, while DNN algorithms such as MLP have good performance in PR and ARR. Among the traditional ML algorithms, the ARR of CART, RF, SVM, and XGB are not significantly different from those of MLP, DBN, and SAE; the ARR of CART is significantly greater than those of LSTM, GRU, and RNN, and otherwise the ARR of the traditional ML algorithms are not significantly worse than those of LSTM, GRU, and RNN. The ASR of all traditional ML algorithms except CART are not significantly worse than those of the six DNN models; the ASR of NB, RF, and XGB are even significantly greater than those of some DNN algorithms. The MDD of RF and XGB are significantly less than those of MLP, DBN, and SAE; the MDD of all traditional ML algorithms are not significantly different from those of LSTM, GRU, and RNN. The ARR and ASR of all ML algorithms are significantly greater than those of BAH and the benchmark index; the MDD of any ML algorithm is significantly greater than that of the benchmark index but significantly less than that of the BAH strategy.

5.3. Comparative Analysis of Performance of Different Trading Strategies in CSICS

The analysis methods of this part are similar to Section 5.2. From Table 14, we can see that the AR, PR, and F1 of MLP are the greatest in all trading algorithms. The RR, AUC, WR, and ASR of LR are the greatest in all trading algorithms, respectively. The ARR of NB is the highest in all trading strategies. The MDD of CSI 300 index (benchmark index) is the smallest in all trading strategies. The WR, ARR, and ASR of all ML algorithms are greater than those of the benchmark index and BAH strategy.

(1) Through the hypothesis test analysis of H1a and H1b, we can obtain p value<2.2e-16. Therefore, there are significant differences between the AR of all trading algorithms. Therefore, we need to do further multiple comparative analysis and the results are shown in Table 15. The number in the table is a p value of any two algorithms of Nemenyi test. From Tables 14 and 15, we can see that the AR of MLP, DBN, and SAE have no significant difference, but they are significantly greater than that of all other trading algorithms except for SVM. The AR of GRU is significantly smaller than that of all traditional ML algorithms. There is no significant difference between the AR of any two traditional ML algorithms except for CART and SVM.

(2) Through the hypothesis test analysis of H2a and H2b, we can obtain p value<2.2e-16. Therefore, there are significant differences between the PR of all trading algorithms. Therefore, we need to do further multiple comparative analysis and the results are shown in Table 16. The number in the table is a p value of any two algorithms of Nemenyi test. From Tables 14 and 16, we can see that the PR of MLP, DBN, and SAE are significantly greater than that of all other trading algorithms, and the PR of MLP, DBN, and SAE have no significant difference. The PR of SVM is significantly greater than that of all other traditional ML algorithms which have no significant difference between any two algorithms except for SVM. The PR of RNN is significantly greater than that of all traditional ML algorithms except for SVM. The PR of GRU and LSTM are not significantly different from that of all traditional ML algorithms except for SVM and LR.

(3) Through the hypothesis test analysis of H3a and H3b, we can obtain p value<2.2e-16. Therefore, there are significant differences between the RR of all trading algorithms. Therefore, we need to do further multiple comparative analysis and the results are shown in Table 17. The number in the table is a p value of any two algorithms of Nemenyi test. From Tables 14 and 17, we can see that the RR of all DNN models are not significantly different. There is no significant difference among the RR of all traditional ML algorithms. The RR of RNN, GRU, and LSTM are significantly smaller than that of any traditional ML algorithm except for CART.

(4) Through the hypothesis test analysis of H4a and H4b, we can obtain p value<2.2e-16. Therefore, there are significant differences between the F1 of all trading algorithms. Therefore, we need to do further multiple comparative analysis and the results are shown in Table 18. The number in the table is a p value of any two algorithms of Nemenyi test. From Tables 14 and 18, we can see that the F1 of MLP, DBN, and SAE have no significant difference, but they are significantly greater than that of all other trading algorithms. There is no significant difference among traditional ML algorithms except SVM, and the F1 of SVM is significantly greater than that of all other traditional ML algorithms.

(5) Through the hypothesis test analysis of H5a and H5b, we can obtain p value<2.2e-16. Therefore, there are significant differences between the AUC of all trading algorithms. Therefore, we need to do further multiple comparative analysis and the results are shown in Table 19. The number in the table is a p value of any two algorithms of Nemenyi test. From Tables 14 and 19, we can see that the AUC of all DNN models have no significant difference. There is no significant difference between the AUC of all traditional ML algorithms. The AUC of all traditional ML algorithms except for CART are significantly greater than that of any DNN model. There is no significant difference among the AUC of MLP, SAE, DBN, RNN, and CART.

(6) Through the hypothesis test analysis of H6a and H6b, we can obtain p value<2.2e-16. Therefore, there are significant differences between the WR of all trading algorithms. Therefore, we need to do further multiple comparative analysis and the results are shown in Table 20. The number in the table is a p value of any two algorithms of Nemenyi test. From Tables 14 and 20, we can see that the WR of BAH and benchmark index have no significant difference, but they are significantly smaller than that of any ML algorithm. The WR of MLP, DBN, and SAE are significantly smaller than that of the other trading algorithms, but there is no significant difference between the WR of MLP, DBN, and SAE. The WR of LSTM and GRU have no significant difference, but they are significantly smaller than that of XGB and significantly greater than that of CART and NB. In traditional ML models, the WR of NB and CART are significantly smaller than that of other algorithms. The WR of XGB is significantly greater than that of all other ML algorithms.

(7) Through the analysis of the hypothesis test of H7a and H7b, we obtain p value<2.2e-16.

Therefore, there are significant differences between the ARR of all trading strategies including the benchmark index and BAH strategy. Therefore, we need to do further multiple comparative analysis and the results are shown in Table 21. From Tables 14 and 21, we can see that ARR of the benchmark index and BAH are significantly smaller than that of all trading algorithms. The ARR of MLP is significantly higher than that of RF, but there is no significant difference between MLP and other algorithms. The ARR of SAE and DBN are significantly higher than that of RF and XGB, but they are not significantly different from ARR of other algorithms. The ARR of NB is significantly higher than that of RF, SVM, and XGB. But, otherwise, there is no significant difference between any other two algorithms. Therefore, the ARR of most traditional ML models are not significantly worse than that of the best DNN model.

(8) Through the hypothesis test analysis of H8a and H8b, we obtain p value<2.2e-16. Therefore, there are significant differences between the ASR of all trading strategies including the benchmark index and the BAH strategy. The results of multiple comparative analysis are shown in Table 22. From Tables 14 and 22, we can see that the ASR of the benchmark index and BAH are significantly smaller than those of all trading algorithms. The ASR of all ML algorithms are significantly higher than those of CART and NB, and there is no significant difference between the ASR of CART and NB. Beyond that, there is no significant difference between any other two algorithms. Therefore, the ASR of all traditional ML models except NB and CART are not significantly worse than that of any DNN model.

(9) Through the hypothesis test analysis of H9a and H9b, we obtain p value<2.2e-16. Therefore, there are significant differences between the MDD of these trading strategies including the benchmark index and the BAH strategy. The results of multiple comparative analysis are shown in Table 23. From Tables 14 and 23, we can see that the MDD of the benchmark index is significantly smaller than those of the other trading strategies including the BAH strategy. The MDD of BAH is significantly greater than those of all trading algorithms except NB. The MDD of MLP, DBN, and SAE are significantly lower than that of NB but significantly higher than those of RNN, LSTM, GRU, LR, and XGB. The MDD of NB is significantly greater than that of all other trading algorithms. Beyond that, there is no significant difference between any other two algorithms. Therefore, all ML algorithms except NB, especially LSTM, RNN, GRU, LR, and XGB, can play a role in controlling trading risk.

In a word, some DNN models such as MLP, DBN, and SAE have good performance in AR, PR, and F1, while traditional ML algorithms such as LR and XGB have good performance in AUC and WR. The ARR of some traditional ML algorithms such as CART, NB, LR, and SVM are not significantly different from those of the six DNN models. The ASR of the six DNN algorithms are not significantly different from those of all traditional ML models except NB and CART. The MDD of LR and XGB are significantly smaller than those of MLP, DBN, and SAE and are not significantly different from those of LSTM, GRU, and RNN. The ARR and ASR of all ML algorithms are significantly greater than those of BAH and the benchmark index; the MDD of all ML algorithms are significantly greater than that of the benchmark index but significantly smaller than that of the BAH strategy.

From the above analysis and evaluation, we can see that the directional evaluation indicators of some DNN models are very competitive in CSICS, while the indicators of some traditional ML algorithms have excellent performance in SPICS. Whether in SPICS or CSICS, the ARR and ASR of all ML algorithms are significantly greater than those of the benchmark index and the BAH strategy, respectively. Among all ML algorithms, there are always some traditional ML algorithms that are not significantly worse than the best DNN model for any performance evaluation indicator (ARR, ASR, and MDD). Therefore, if we do not consider transaction cost and other factors affecting trading, DNN models are an alternative but not necessarily the best choice for stock trading.

In the same period, the ARR of any ML algorithm in CSICS is significantly greater than that of the same algorithm in SPICS (p value <0.001 in the Nemenyi test). Meanwhile, the MDD of any ML algorithm in CSICS is significantly greater than that of the same algorithm in SPICS (p value <0.001 in the Nemenyi test). The results show that the quantitative trading algorithms can more easily obtain excess returns in the Chinese A-share market, but the volatility risk of trading in Chinese A-share market is significantly higher than that of the US stock market in the past 8 years.

6. The Impact of Transaction Cost on Performance of ML Algorithms

Transaction cost can affect the profitability of a stock trading strategy. Transaction cost that can be ignored in long-term strategies is significantly magnified in daily trading. However, many algorithmic trading studies assume that transaction cost does not exist ([10, 17], etc.). In practice, frictions such as transaction cost can distort the market away from the perfect model in textbooks. Costs known prior to trading activity, such as commissions, exchange fees, and taxes, are referred to as transparent. Costs that have to be estimated are known as implicit, comprising the bid-ask spread, latency or slippage, and the related market impact. This section focuses on transparent and implicit costs and how they affect trading performance in daily trading.

6.1. Experimental Settings and Backtesting Algorithm

In this part, the transparent transaction cost is calculated as a certain percentage of transaction turnover for convenience. The implicit transaction cost is very complicated to calculate, as it requires reasonable estimates of the random changes of the market environment and stock prices; therefore, we only discuss the impact of slippage on trading performance.

The transaction cost structure of American stocks is similar to that of Chinese A-shares. We assume that the transparent transaction cost is calculated as a percentage of turnover, e.g., less than 0.5% [40, 41] or 0.2% and 0.5% as in the literature [42]. The estimation of slippage is different.

In some quantitative trading simulation software such as JoinQuant [43] and Abuquant [44], the slippage is set to 0.02. The transparent transaction cost and implicit transaction cost are charged in both directions when buying and selling. It is worth noting that the transparent transaction cost varies with the different brokers, while the implicit transaction cost is related to market liquidity, market information, network status, trading software, etc.

We set the slippage to s0=0, s1=0.01, s2=0.02, s3=0.03, s4=0.04 and the transparent transaction cost to c0=0, c1=0.001, c2=0.002, c3=0.003, c4=0.004, c5=0.005. For the different combinations, we study the impact of the different transaction cost structures on trading performance. We assume that the buying and selling positions are one unit, so the turnover equals the corresponding stock price. When buying a stock, we not only pay a certain percentage of the purchase price as cost but also an uncertain slippage cost; that is, we pay a price higher than the real-time price when buying. Likewise, when selling a stock, we not only pay a certain percentage of the selling price as cost but also an uncertain slippage cost; generally speaking, we sell at a price lower than the real-time price. It is worth noting that our trading strategy is self-financing. If the ML algorithm predicts consecutive buying or selling signals, i.e., TS_t = TS_{t-1}, we continue to hold or do nothing, so the transaction cost at that time is 0. When TS_t ≠ TS_{t-1}, the position may change from holding to selling or from an empty position to buying, and we pay transaction cost due to the trading operation. Finally, we get the real yield

Ret_t = (Sell_t - Buy_{t-1}) / Buy_{t-1},
Buy_t = Close_t × (1 + c × |TS_t - TS_{t-1}|) + s × |TS_t - TS_{t-1}|,
Sell_t = Close_t × (1 - c × |TS_t - TS_{t-1}|) - s × |TS_t - TS_{t-1}|,

where Close_t denotes the t-th closing price, TS_t denotes the t-th trading signal, Buy_t and Sell_t denote the t-th executing prices, and Ret_t denotes the t-th return rate.

Based on the above analysis, we propose a backtesting algorithm with transaction cost, as shown in Algorithm 3; a runnable R sketch follows the listing.

Input: TS # TS is the trading signal series of a stock.
s # s is the slippage.
c # c is the transparent transaction cost.
Output: WR, ARR, ASR, MDD
(1) N=length of Stock Code List # 424 SPICS or 185 CSICS.
(2) WR=NULL; ARR=NULL; ASR=NULL; MDD=NULL
(3) for (i in 1: N)
(4) Stock_Data=Stock Code List[i]
(5) Close=Stock_Data [“Closing Price”]
(6) Sell=Close*(1 - c*|TS_t - TS_{t-1}|) - s*|TS_t - TS_{t-1}| # effective selling price
(7) Buy=Close*(1 + c*|TS_t - TS_{t-1}|) + s*|TS_t - TS_{t-1}| # effective buying price
(8) Ret=(Sell_t - Buy_{t-1})/Buy_{t-1} # Ret is the return rate series.
(9) TDRR=lag (TS)*Ret # TDRR is the daily return through trading.
(10) WR[i]=sum (TDRR>0)/sum(TDRR≠0)
(11) ARR[i]=Return.annualized (TDRR)
(12) ASR[i]=SharpeRatio.annualized (TDRR)
(13) MDD[i]=maxDrawDown (TDRR)
(14) end for
(15) return (WR, ARR, ASR, MDD)
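
A runnable R sketch of the cost-adjusted return calculation in Algorithm 3 for one stock is given below; the price path and signals are simulated, and the parameter names slip and fee stand for the slippage s and the transparent cost c.

# Cost-adjusted daily trading returns for one stock (Algorithm 3 sketch).
# Costs are charged only when the position changes, i.e., |TS_t - TS_{t-1}| = 1.
cost_adjusted_returns <- function(close, sig, slip = 0.02, fee = 0.003) {
  change <- c(0, abs(diff(sig)))                      # 1 where the signal flips
  sell <- close * (1 - fee * change) - slip * change  # effective selling price
  buy  <- close * (1 + fee * change) + slip * change  # effective buying price
  n <- length(close)
  ret <- (sell[2:n] - buy[1:(n - 1)]) / buy[1:(n - 1)]  # return rate series
  sig[1:(n - 1)] * ret  # TDRR: yesterday's signal decides today's exposure
}

set.seed(1)
close <- cumprod(c(100, 1 + rnorm(249, 0, 0.01)))  # hypothetical price path
sig   <- rbinom(250, 1, 0.5)                       # hypothetical 0/1 signals
tdrr  <- cost_adjusted_returns(close, sig)
mean(tdrr)  # average daily return net of transaction cost
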
6.2. Analysis of Impact of Transaction Cost on the Trading Performance of SPICS

Transaction cost is one of the most important factors affecting trading performance. In US stock trading, transparent transaction cost can be charged according to a fixed fee per order or month, or a floating fee based on the volume and turnover of each transaction. Sometimes, customers can also negotiate with broker to determine transaction cost. The transaction cost charged by different brokers varies greatly. Meanwhile, implicit transaction cost is not known beforehand and the estimations of them are very complex. Therefore, we assume that the percentage of turnover is the transparent transaction cost for ease of calculation. In the aspect of implicit transaction cost, we only consider the impact of slippage on trading performance.

(1) Analysis of Impact of Transaction Cost on WR. As can be seen from Table 24, WR decreases with the increase of transaction cost for any trading algorithm, which is intuitive. When the transaction cost is set to (s, c) = (0.04, 0.005), the WR of each algorithm is the lowest. Compared with the setting (s, c) = (0, 0), the WR of MLP, DBN, SAE, RNN, LSTM, GRU, CART, NB, RF, LR, SVM, and XGB are reduced by 5.80%, 5.97%, 5.91%, 15.83%, 18.04%, 13.95%, 21.71%, 16.04%, 22.16%, 18.54%, 18.50%, and 25.97%, respectively. Therefore, MLP, DBN, and SAE are more tolerant to transaction cost. Generally speaking, the DNN models have a stronger capacity to accommodate transaction cost than the traditional ML models. For a single trading algorithm such as MLP, if we do not consider slippage, i.e., s = 0, the average WR of MLP is 0.5510 under the transaction cost structures (s0, c1), (s0, c2), (s0, c3), (s0, c4), (s0, c5); if we do not consider transparent transaction cost, i.e., c = 0, the average WR of MLP is 0.5618 under the transaction cost structures (s1, c0), (s2, c0), (s3, c0), (s4, c0); so transparent transaction cost has a greater impact than slippage. Through multiple comparative analysis, the WR under the transaction cost structure (s1, c0) is not significantly different from the WR without transaction cost for MLP, DBN, and SAE; the WR under all other transaction cost structures is significantly smaller than the WR without transaction cost. For all trading algorithms except MLP, DBN, and SAE, the WR under the transaction cost structures (s1, c0) and (s2, c0) are not significantly different from the WR without transaction cost; the WR under all other transaction cost structures is significantly smaller than the WR without transaction cost.

(2) Analysis of Impact of Transaction Cost on ARR. As can be seen from Table 25, ARR decreases with the increase of transaction cost for any trading algorithm. Undoubtedly, when the transaction cost is set to (s, c) = (0.04, 0.005), the ARR of each algorithm is the lowest. Compared with the settings without transaction cost, the ARR of MLP, DBN, and SAE are reduced by 40.31%, 41.57%, and 40.93%, respectively, while the ARR of the other trading algorithms decrease by more than 100% compared with those without transaction cost. Therefore, excessive transaction cost can lead to serious losses in accounts. For a general setting of s and c, i.e., (s, c) = (0.02, 0.003), the ARR of MLP, DBN, and SAE decrease by 23.26%, 24.00%, and 23.61%, respectively, while the ARR of the other algorithms decrease by more than 50%, and those of CART and XGB decrease by more than 100%. Therefore, MLP, DBN, and SAE are more tolerant to high transaction cost. For a single trading algorithm such as RNN, if we do not consider slippage, i.e., s = 0, the average ARR of RNN is 0.1434 under the transaction cost structures (s0, c1), (s0, c2), (s0, c3), (s0, c4), (s0, c5); if we do not consider transparent transaction cost, i.e., c = 0, the average ARR of RNN is 0.2531 under the transaction cost structures (s1, c0), (s2, c0), (s3, c0), (s4, c0); so transparent transaction cost has a greater impact than slippage. Through multiple comparative analysis, the ARR under the transaction cost structures (s1, c0), (s2, c0), (s3, c0), (s0, c1) are not significantly different from the ARR without transaction cost for MLP, DBN, and SAE; the ARR under all other transaction cost structures are significantly smaller than the ARR without transaction cost. For all trading algorithms except MLP, DBN, and SAE, the ARR under the transaction cost structures (s1, c0), (s2, c0) are not significantly different from the ARR without transaction cost; the ARR under all other transaction cost structures are significantly smaller than the ARR without transaction cost.

(3) Analysis of Impact of Transaction Cost on ASR. As can be seen from Table 26, ASR decreases with the increase of transaction cost for any trading algorithm. Undoubtedly, when the transaction cost is set to (s, c) = (0.04, 0.005), the ASR of each algorithm is the lowest. Compared with the setting without transaction cost, the ASR of MLP, DBN, and SAE are reduced by 39.97%, 41.23%, and 40.66%, respectively, while the ASR of the other trading algorithms are reduced by more than 90% compared with the case of no transaction cost. Therefore, excessive transaction cost will significantly reduce ASR. For a general setting of s and c, i.e., (s, c) = (0.02, 0.003), the ASR of MLP, DBN, and SAE decrease by 22.62%, 23.36%, and 23.02%, respectively, while the ASR of the other algorithms decrease by more than 50%, and those of CART and XGB decrease by more than 100%. Therefore, MLP, DBN, and SAE are more tolerant to transaction cost. For a single trading algorithm such as NB, if we do not consider slippage, i.e., s = 0, the average ASR of NB is 0.8052 under the transaction cost structures (s0, c1), (s0, c2), (s0, c3), (s0, c4), (s0, c5); if we do not consider transparent transaction cost, i.e., c = 0, the average ASR of NB is 1.4182 under the transaction cost structures (s1, c0), (s2, c0), (s3, c0), (s4, c0); so transparent transaction cost has a greater impact than slippage. Through multiple comparative analysis, the ASR under the transaction cost structures (s1, c0), (s2, c0), (s3, c0) are not significantly different from the ASR without transaction cost for MLP, DBN, and SAE; the ASR under all other transaction cost structures are significantly smaller than the ASR without transaction cost. For all trading algorithms except MLP, DBN, and SAE, the ASR under the transaction cost structures (s1, c0), (s2, c0) are not significantly different from the ASR without transaction cost; the ASR under all other transaction cost structures are significantly smaller than the ASR without transaction cost.

(4) Analysis of Impact of Transaction Cost on MDD. As can be seen from Table 27, MDD increases with the increase of transaction cost for any trading algorithm. Undoubtedly, when the transaction cost is set to (s, c) = (0.04, 0.005), the MDD of each algorithm increases to the highest level. In this case, compared with the settings without transaction cost, the MDD of MLP, DBN, and SAE increase by 9.32%, 11.08%, and 10.32%, respectively, while the MDD of the other trading algorithms increase by more than 80% compared with those without considering transaction cost. Therefore, excessive transaction cost can cause serious potential losses to the account. For a general setting of s and c, i.e., (s, c) = (0.02, 0.003), the MDD of MLP, DBN, and SAE increase by 4.83%, 5.80%, and 5.33%, respectively, while the MDD of the other algorithms increase by more than 35%, and those of CART, RF, and XGB increase by more than 100%. Therefore, MLP, DBN, and SAE are more tolerant to transaction cost. As a whole, the DNN models have a stronger capacity to accommodate transaction cost than the traditional ML models. For a single trading algorithm such as GRU, if we do not consider slippage, i.e., s = 0, the average MDD of GRU is 0.4459 under the transaction cost structures (s0, c1), (s0, c2), (s0, c3), (s0, c4), (s0, c5); if we do not consider transparent transaction cost, i.e., c = 0, the average MDD of GRU is 0.3559 under the transaction cost structures (s1, c0), (s2, c0), (s3, c0), (s4, c0); so transparent transaction cost has a greater impact than slippage. Through multiple comparative analysis, the MDD under any transaction cost structure is not significantly different from the MDD without transaction cost for MLP, DBN, and SAE. For all trading algorithms except MLP, DBN, and SAE, such as LR, the MDD under the transaction cost structures (s0, c1), (s1, c0), (s2, c0) are not significantly different from the MDD without transaction cost; the MDD under all other transaction cost structures are significantly greater than the MDD without transaction cost.

Through the analysis of the above performance evaluation indicators (Tables 24-27), we find that trading performance after considering transaction cost is worse than that without considering transaction cost, as in actual trading situations. It is noteworthy that the performance changes of the DNN algorithms, especially MLP, DBN, and SAE, are very small after considering transaction cost, which shows that these three algorithms have good tolerance to changes of transaction cost. In particular, the MDD of the three algorithms shows no significant difference from the case with no transaction cost, so we can consider applying them in actual trading. Meanwhile, we conclude that transparent transaction cost has a greater impact on trading performance than slippage for SPICS: because the prices of SPICS are high, a transparent transaction cost charged as a percentage of turnover is large relative to the fixed slippage. In actual transactions, special attention needs to be paid to the fact that the trading performance under most transaction cost structures is significantly worse than the trading performance without considering transaction cost. It is worth noting that the performance of the traditional ML algorithms is not worse than that of the DNN algorithms without considering transaction cost, while the performance of the DNN algorithms is better than that of the traditional ML algorithms after considering transaction cost.

6.3. Analysis of Impact of Transaction Cost on the Trading Performance of CSICS

Similar to Section 6.2, we discuss the impact of transaction cost on the trading performance of CSICS in the following. In the Chinese A-share market, the transparent transaction cost is usually set to a certain percentage of turnover, which matches the assumption in our experimental settings. As in the US stock market, the smallest unit of price change is 0.01 (one tick), so it is reasonable to set slippage to 0.01-0.05. Of course, it should be noted that price fluctuations may be more intense at the close than in the middle of a trading day.
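
To make the (s, c) notation concrete, the sketch below shows one plausible way a fixed slippage s and a percentage-of-turnover transparent cost c could enter a single round-trip trade. This is our reading of the setup, not the paper's exact backtesting code.

```python
def net_trade_return(buy_price, sell_price, s=0.02, c=0.003):
    """Net return of one round-trip trade under an (s, c) cost structure.

    Assumed model (our interpretation of the paper's setup):
    - slippage s: executed prices are worse by a fixed amount per side;
    - transparent cost c: charged as a fraction of turnover on each side.
    """
    effective_buy = buy_price + s                 # pay up by s when entering
    effective_sell = sell_price - s               # give up s when exiting
    fees = c * (effective_buy + effective_sell)   # percentage-of-turnover fee
    return (effective_sell - effective_buy - fees) / effective_buy

# A low-priced CSICS-like stock loses more of its edge to fixed slippage than a
# high-priced SPICS-like stock, while the percentage fee scales with price --
# the asymmetry between the two markets discussed in the text.
print(net_trade_return(20.0, 20.4), net_trade_return(200.0, 204.0))
```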

(1) Analysis of Impact of Transaction Cost on WR. As can be seen from Table 28, WR decreases with the increase of transaction cost for any trading algorithm. When the transaction cost is set to (s, c) = (0.04, 0.005), the WR of each algorithm is the smallest. Compared with the setting without transaction cost, the WR of MLP, DBN, SAE, RNN, LSTM, GRU, CART, NB, RF, LR, SVM, and XGB are reduced by 6.71%, 6.88%, 6.97%, 22.69%, 17.26%, 15.48%, 24.30%, 14.91%, 24.84%, 21.12%, 21.12%, and 29.19%, respectively. For a general setting of s and c, i.e., (s, c) = (0.02, 0.003), the WR of MLP, DBN, and SAE decrease by 4.10%, 4.20%, and 4.30%, respectively, while the WR of the other algorithms decrease by more than 9%; the WR of CART, RF, and XGB decrease by more than 15%. Therefore, MLP, DBN, and SAE are more tolerant to transaction cost. Take a single trading algorithm such as LSTM: if we do not consider slippage, i.e., s=0, the average WR of LSTM is 0.5417 under the transaction cost structures (s0, c1), (s0, c2), (s0, c3), (s0, c4), (s0, c5); if we do not consider transparent transaction cost, i.e., c=0, the average WR of LSTM is 0.5304 under the transaction cost structures (s1, c0), (s2, c0), (s3, c0), (s4, c0); so transparent transaction cost has a smaller impact than slippage. Through multiple comparative analysis, the WR under the transaction cost structures (s0, c1), (s0, c2), and (s1, c0) are not significantly different from the WR without transaction cost for MLP, DBN, SAE, and NB; the WR under all other transaction cost structures are significantly smaller than the WR without transaction cost. For all trading algorithms except MLP, DBN, SAE, and NB, the WR under the transaction cost structure (s0, c1) is not significantly different from the WR without transaction cost; the WR under all other transaction cost structures are significantly smaller than the WR without transaction cost.
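
For reference, WR can be computed as a simple hit rate over net returns, as in the sketch below; whether trades or trading days are counted follows the paper's own definition, and the numbers here are illustrative only.

```python
import numpy as np

def winning_rate(trade_returns):
    """WR: fraction of trades with strictly positive net return."""
    r = np.asarray(trade_returns)
    return (r > 0).mean()

# Example: a 30 bps per-trade cost turns marginal winners into losers,
# which is why WR falls as the cost structure grows.
trades = np.array([0.002, -0.001, 0.0005, 0.004, -0.003, 0.001])
print(winning_rate(trades), winning_rate(trades - 0.003))
```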

(2) Analysis of Impact of Transaction Cost on ARR. As can be seen from Table 29, ARR decreases with the increase of transaction cost for any trading algorithm. Undoubtedly, when the transaction cost is set to (s, c) = (0.04, 0.005), the ARR of each algorithm is the smallest. Compared with the setting without transaction cost, the ARR of MLP, DBN, and SAE reduce by 50.73%, 51.75%, and 52.25%, respectively, while the ARR of the other trading algorithms decrease by more than 100%, i.e., they turn negative. Therefore, excessive transaction cost can lead to serious losses in the accounts. For a general setting of s and c, i.e., (s, c) = (0.02, 0.003), the ARR of MLP, DBN, and SAE decrease by 27.41%, 27.97%, and 28.25%, respectively, while the ARR of the other algorithms decrease by more than 50%, and the ARR of CART, NB, RF, and XGB decrease by more than 100%. Therefore, MLP, DBN, and SAE are more tolerant to transaction cost. Take a single trading algorithm such as SAE: if we do not consider slippage, i.e., s=0, the average ARR of SAE is 0.5040 under the transaction cost structures (s0, c1), (s0, c2), (s0, c3), (s0, c4), (s0, c5); if we do not consider transparent transaction cost, i.e., c=0, the average ARR of SAE is 0.4468 under the transaction cost structures (s1, c0), (s2, c0), (s3, c0), (s4, c0); so transparent transaction cost has a smaller impact than slippage. Through multiple comparative analysis, the ARR under the transaction cost structures (s0, c1), (s0, c2), (s0, c3), (s1, c0), and (s1, c1) are not significantly different from the ARR without transaction cost for MLP, DBN, and SAE; the ARR under all other transaction cost structures are significantly smaller than the ARR without transaction cost. For RNN, LSTM, GRU, CART, RF, LR, and SVM, the ARR under the transaction cost structures (s0, c1), (s0, c2), and (s1, c0) are not significantly different from the ARR without transaction cost; the ARR under all other transaction cost structures are significantly smaller than the ARR without transaction cost. For NB and XGB, the ARR under the transaction cost structures (s0, c1) and (s1, c0) are not significantly different from the ARR without transaction cost; the ARR under all other transaction cost structures are significantly smaller than the ARR without transaction cost.
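
A short sketch of the ARR calculation may help interpret the figures above, including why a "decrease of more than 100%" simply means the ARR turns negative; the geometric compounding and the 250-day year are our assumptions, consistent with the paper's one-year convention.

```python
import numpy as np

def annualized_return_rate(daily_returns, trading_days=250):
    """ARR: geometric annualized return from daily net returns."""
    r = np.asarray(daily_returns)
    total_growth = np.prod(1.0 + r)             # compounded terminal wealth
    years = len(r) / trading_days               # 250 trading days = 1 year
    return total_growth ** (1.0 / years) - 1.0

# Example: a small per-day cost drag pushes a positive ARR below zero,
# i.e., a reduction of "more than 100%".
rng = np.random.default_rng(2)
gross = rng.normal(0.0006, 0.01, 1000)
print(annualized_return_rate(gross), annualized_return_rate(gross - 0.0008))
```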

(3) Analysis of Impact of Transaction Cost on ASR. As can be seen from Table 30, ASR decreases with the increase of transaction cost for any trading algorithm. Undoubtedly, when the transaction cost is set to (s, c) = (0.04, 0.005), the ASR of each algorithm is the smallest. Compared with the setting without transaction cost, the ASR of MLP, DBN, and SAE reduce by 48.99%, 50.11%, and 50.70%, respectively, while the ASR of the other trading algorithms decrease by more than 100%. Therefore, excessive transaction cost can lead to serious losses in the accounts. For a general setting of s and c, i.e., (s, c) = (0.02, 0.003), the ASR of MLP, DBN, and SAE decrease by 26.01%, 26.61%, and 26.94%, respectively, while the ASR of the other algorithms decrease by more than 50%, and the ASR of CART, NB, RF, and XGB decrease by more than 100%. Therefore, MLP, DBN, and SAE are more tolerant to transaction cost. Take a single trading algorithm such as LSTM: if we do not consider slippage, i.e., s=0, the average ASR of LSTM is 1.1129 under the transaction cost structures (s0, c1), (s0, c2), (s0, c3), (s0, c4), (s0, c5); if we do not consider transparent transaction cost, i.e., c=0, the average ASR of LSTM is 0.8837 under the transaction cost structures (s1, c0), (s2, c0), (s3, c0), (s4, c0); so transparent transaction cost has a smaller impact than slippage. Through multiple comparative analysis, the ASR under the transaction cost structures (s0, c1), (s0, c2), (s0, c3), (s1, c0), and (s1, c1) are not significantly different from the ASR without transaction cost for MLP, DBN, and SAE; the ASR under all other transaction cost structures are significantly smaller than the ASR without transaction cost. For LSTM and GRU, the ASR under the transaction cost structures (s0, c1), (s0, c2), and (s1, c0) are not significantly different from the ASR without transaction cost; the ASR under all other transaction cost structures are significantly smaller than the ASR without transaction cost. For RNN, NB, RF, LR, and SVM, the ASR under the transaction cost structures (s0, c1) and (s1, c0) are not significantly different from the ASR without transaction cost; the ASR under all other transaction cost structures are significantly smaller than the ASR without transaction cost. For CART and XGB, the ASR under the transaction cost structure (s0, c1) is not significantly different from the ASR without transaction cost; the ASR under all other transaction cost structures are significantly smaller than the ASR without transaction cost.

(4) Analysis of Impact of Transaction Cost on MDD. As can be seen from Table 31, MDD increases with the increase of transaction cost for any trading algorithm. Undoubtedly, when the transaction cost is set to (s, c) = (0.04, 0.005), the MDD of each algorithm increases to the highest level. In this case, compared with the setting without transaction cost, the MDD of MLP, DBN, and SAE increase by 10.31%, 11.35%, and 10.83%, respectively, while the MDD of the other trading algorithms increase by more than 30%. Therefore, excessive transaction cost can cause serious potential losses to the account. For a general setting of s and c, i.e., (s, c) = (0.02, 0.003), the MDD of MLP, DBN, and SAE increase by 4.31%, 4.81%, and 4.80%, respectively, while the MDD of the other algorithms increase by more than 20%, and the MDD of CART, RF, and XGB increase by more than 60%. Therefore, MLP, DBN, and SAE are more tolerant to transaction cost. Take a single trading algorithm such as RNN: if we do not consider slippage, i.e., s=0, the average MDD of RNN is 0.7402 under the transaction cost structures (s0, c1), (s0, c2), (s0, c3), (s0, c4), (s0, c5); if we do not consider transparent transaction cost, i.e., c=0, the average MDD of RNN is 0.7754 under the transaction cost structures (s1, c0), (s2, c0), (s3, c0), (s4, c0); so transparent transaction cost has a smaller impact than slippage. Through multiple comparative analysis, the MDD under most of the transaction cost structures are not significantly different from the MDD without transaction cost for MLP, DBN, and SAE, which shows that the three algorithms have a higher tolerance for transaction cost. For all trading algorithms except MLP, DBN, and SAE, the MDD under the transaction cost structures (s0, c1), (s0, c2), and (s1, c0) are not significantly different from the MDD without transaction cost; the MDD under all other transaction cost structures are significantly greater than the MDD without transaction cost. It is worth noting that the MDD of GRU under the transaction cost structure (s1, c1) is also not significantly different from the MDD without transaction cost.

Through the analysis of Tables 28-31, we find that trading performance becomes worse and worse as transaction cost increases, and excessive transaction cost may cause huge losses. In particular, for some traditional ML algorithms, ARR and ASR become negative and MDD approaches 100% as transaction cost increases. The DNN models, especially MLP, DBN, and SAE, are more tolerant to changes of transaction cost and are therefore more suitable for actual trading activities. Meanwhile, the experimental results indicate that the impact of slippage on trading performance is greater than that of transparent transaction cost, because the prices of CSICS are generally low, so a transparent cost charged as a certain percentage of turnover is small relative to the fixed slippage. Through multiple comparative analysis, we find that the performance of these algorithms under most transaction cost structures is significantly worse than that without considering transaction cost. This finding shows that the trading performance of these algorithms is very sensitive to transaction cost, which needs to be paid enough attention in actual trading activities.

7. Discussion

Forecasting the ups and downs of future stock prices and making trading decisions are always challenging tasks. However, more and more investors are attracted to trading activities by the high returns of the stock market, and high risk drives them to construct profitable trading strategies. Meanwhile, fast-changing financial markets, the explosive growth of financial big data, the increasing complexity of financial investment instruments, and the need to capture trading opportunities rapidly provide more and more research topics for the academic community. In this paper, we apply some popular and widely used ML algorithms to stock trading. Our purpose is to explore whether there are significant differences in stock trading performance among different ML algorithms and whether we can find highly profitable trading algorithms in the presence of transaction cost.

Financial data, which are generated in changing financial markets, are characterized by randomness, a low signal-to-noise ratio, nonlinearity, and high dimensionality. Therefore, it is difficult to find inherent patterns in financial big data by using ML algorithms, and our results also bear out this point.

When using ML algorithms to predict stock price trends, the directional evaluation indicators are not as good as expected. For example, the AR, PR, and RR of LSTM and RNN are about 50%-55%, which is only slightly better than a random guess. In contrast, some traditional ML algorithms such as XGB have a stronger ability in the directional prediction of stock prices. These simpler models are less likely to overfit when capturing intrinsic patterns of financial data and can therefore make better predictions about the direction of stock price changes. Moreover, ML classification algorithms typically assume that sample data are independent and identically distributed, while DNN algorithms such as LSTM and RNN rely on the autocorrelation of financial time series data, which is doubtful given the characteristics of financial data. Therefore, the predictive ability of these algorithms may be weakened by the noise in historical lag data.
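
Assuming AR, PR, and RR denote the usual accuracy, precision, and recall on up/down labels, which is our reading of these directional indicators rather than a definition taken from the paper, they can be computed as follows.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical up(1)/down(0) labels and predictions, for illustration only.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 1, 1, 0]

print("AR:", accuracy_score(y_true, y_pred))   # fraction of correct directions
print("PR:", precision_score(y_true, y_pred))  # precision of predicted "up" days
print("RR:", recall_score(y_true, y_pred))     # recall of actual "up" days
```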

From the perspective of trading algorithms, traditional ML models map the feature space to the target space with relatively few parameters, so the learning goal can be accomplished with less data. DNN models connect neurons into multiple layers to form complex network structures, through which the mapping relationships between input and output are established. As the number of layers increases, the weight parameters can be automatically adjusted to extract high-level features. Compared with traditional ML models, DNN models have far more parameters, so their performance tends to improve as the amount of data grows, and complex DNN models need a lot of data to avoid underfitting and overfitting. However, we only use the data of 250 trading days (one year) as the training set to construct a trading model and then predict stock prices in the next week, so too little data may lead to poor directional prediction and trading performance.
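
For concreteness, here is a minimal sketch of the walk-forward protocol described above (fit on 250 trading days, predict the next week); the nonoverlapping weekly step and the helper names are our assumptions, not the paper's code.

```python
import numpy as np

def rolling_window_signals(features, labels, model_factory,
                           train_len=250, test_len=5):
    """Walk-forward signals: fit on the past 250 trading days (one year),
    predict the next 5 days (one week), then roll the window forward."""
    signals = []
    for start in range(0, len(features) - train_len - test_len + 1, test_len):
        mid, end = start + train_len, start + train_len + test_len
        model = model_factory()  # fresh, untrained model for each window
        model.fit(features[start:mid], labels[start:mid])
        signals.extend(model.predict(features[mid:end]))
    return np.array(signals)

# Usage sketch with a stand-in classifier:
# from sklearn.linear_model import LogisticRegression
# sig = rolling_window_signals(X, y, lambda: LogisticRegression(max_iter=1000))
```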

In the aspect of transaction cost, it is unexpected that DNN models, especially MLP, DBN, and SAE, adapt to transaction cost better than traditional ML models. In fact, the higher PR of MLP, DBN, and SAE indicates that they can identify more trading opportunities with higher positive returns. At the same time, DNN models adapt well to changes in the transaction cost structure: compared with traditional ML models, the reductions in the ARR and ASR of DNN models are very small when transaction cost increases. In particular, there is no significant difference between the MDD of DNN models under most transaction cost structures and that without considering transaction cost, which further proves that DNN models can effectively control downside risk. Therefore, DNN algorithms are better choices than traditional ML algorithms in actual transactions. In this paper, we divide transaction cost into transparent transaction cost and implicit transaction cost, and the impact of the two differs between markets: transparent transaction cost has a larger impact than implicit transaction cost for SPICS, while the opposite holds for CSICS, because the prices of SPICS are higher than those of CSICS. Although we have taken account of the actual situation in real trading, the assumption of transaction cost in this paper is still relatively simple. Therefore, we will consider the impact of opportunity cost and market impact cost on trading performance in future research.

This paper makes a multiple comparative analysis of the trading performance of different ML algorithms by means of nonparametric statistical testing. We comprehensively discuss whether there are significant differences among the algorithms under different evaluation indicators, both with and without transaction cost. We show that the DNN algorithms have better profitability and risk control ability in a realistic environment with transaction cost. Therefore, DNN algorithms can serve as candidates for algorithmic trading and quantitative trading.
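
As one standard recipe for this kind of nonparametric multiple comparison, a Friedman test followed by pairwise post-hoc tests can be sketched as follows; the paper's exact tests and correction method are its own, and the data below are synthetic placeholders.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical per-stock ASR values for three algorithms; in a setup like the
# paper's, each algorithm yields one performance value per stock, giving
# matched samples across algorithms.
rng = np.random.default_rng(3)
asr_mlp, asr_lstm, asr_xgb = (rng.normal(m, 0.3, 100) for m in (1.2, 1.0, 0.9))

# Friedman test: is there any significant difference among the algorithms?
stat, p = friedmanchisquare(asr_mlp, asr_lstm, asr_xgb)
print(f"Friedman p-value: {p:.4f}")

# Pairwise Wilcoxon signed-rank tests as a simple post-hoc follow-up
# (a Bonferroni or Nemenyi correction would be applied in practice).
print(wilcoxon(asr_mlp, asr_lstm).pvalue, wilcoxon(asr_mlp, asr_xgb).pvalue)
```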

8. Conclusion

In this paper, we take 424 SPICS in the US market and 185 CSICS in the Chinese market as research objects, select the data of 2000 trading days before December 31, 2017, build 44 technical indicators as input features for the ML algorithms, and predict the trend of each stock price as the trading signal. We then formulate trading strategies based on these signals and perform backtesting. Finally, we analyze and evaluate the trading performance of these algorithms both with and without transaction cost.

Our contribution is to compare the significant differences in trading performance between the DNN algorithms and the traditional ML algorithms in the Chinese and American stock markets. The experimental results on SPICS and CSICS show that some traditional ML algorithms perform better than the DNN algorithms on most of the directional evaluation indicators. Without considering transaction cost, even the DNN algorithms with the best performance indicators (WR, ARR, ASR, and MDD) among all ML algorithms are not significantly better than the best traditional ML algorithms. With the increase of transaction cost, the trading performance of all ML algorithms becomes worse and worse. Under the same transaction cost structure, the DNN algorithms, especially MLP, DBN, and SAE, show lower performance degradation than the traditional ML algorithms, indicating that the DNN algorithms have a strong tolerance to changes of transaction cost. Meanwhile, transparent transaction cost and implicit transaction cost have different impacts on SPICS and CSICS. The experimental results also reveal that the trading performance of all ML algorithms is sensitive to transaction cost, and more attention is needed in actual transactions. Therefore, it is essential to select competitive algorithms for stock trading according to trading performance, adaptability to transaction cost, and risk control ability, in both the American stock market and the Chinese A-share market.

With the rapid development of ML technology and convenient access to financial big data, future research can be carried out in the following directions: (1) using ML algorithms to implement dynamic portfolio optimization across stocks; (2) using ML algorithms for high-frequency trading and statistical arbitrage; (3) considering the impact of more complex implicit transaction costs, such as opportunity cost and market impact cost, on stock trading performance. Solving these problems will help to develop an advanced and profitable automated trading system based on financial big data, covering dynamic portfolio construction, transaction execution, cost control, and risk management that adapt to changing market conditions and even to changes in investors' risk preferences over time.

Data Availability

The software code and experimental data used in this study have been shared and can be found at https://figshare.com/account/articles/7238345.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (nos. 71571136 and 61802258) and in part by the Science and Technology Commission of Shanghai Municipality (no. 16JC1403000).

Supplementary Materials

The supplementary materials submitted along with our manuscript include the program code of every algorithm, the datasets, and the main results of this work. The materials have been uploaded to the Figshare database (https://doi.org/10.6084/m9.figshare.7569032). (Supplementary Materials)