Abstract

LoRa is an IoT communication technology that achieves ultra-long-distance transmission through spread spectrum modulation. However, this long range comes at the cost of data rate, and data collisions are prone to occur when the number of nodes is large. In this article, we investigate the types of data collisions in LoRa wireless networks, most of which are affected by Spreading Factor (SF) assignment. At present, SF assignment for LoRa in industry is mostly based on Min-airtime and Min-distance; when the number of nodes is large, data collisions between nodes increase sharply under these schemes. This paper proposes an SF redistribution scheme under limited network resources in order to improve the terminal capacity of the LoRa gateway. First, the problem of minimizing the data collision rate without expanding gateway or network resources is formulated. Specifically, the reallocation of SFs as the number of terminals grows is studied. Finally, considering the randomness of the data sent by the terminals, an SF redistribution scheme based on deep reinforcement learning (DRL) is developed. The simulation results show that the collision rate of the proposed SF redistribution scheme is nearly 30% lower than that of Min-airtime and Min-distance, and its total energy consumption is close to that of Min-distance. Therefore, the proposed SF redistribution scheme can effectively improve the gateway capacity of a LoRa wireless network.

1. Introduction

In recent years, the Internet of Things (IoT) industry has developed rapidly, and existing mobile cellular communication technologies cannot meet the long-distance, low-power, massive-connection requirements of IoT node equipment [1, 2]. In this context, the low power wide area network (LPWAN) [3] came into being; it is a general term for communication technologies suited to long-distance, low-power, low-bandwidth, multiconnection IoT links [4]. LPWAN includes LoRa, NB-IoT [4], RPMA [5, 6], Sigfox [7, 8], LTE-M [9, 10], and other wireless communication technologies [11, 12]. These can be divided into two categories according to whether licensed spectrum is required. Operating in unlicensed bands, LoRa has been widely used in the IoT field since its invention thanks to its long transmission distance and low power consumption. Compared with NB-IoT, which requires operator authorization, its on-demand deployment and low deployment cost also make it popular with many organizations and companies that need ad hoc [13] networks. Especially in scenarios with weak signals, long transmission distances, and low power budgets, LoRa has advantages over other communication technologies [14].

LoRa [15, 16] achieves strong anti-interference and long-distance transmission through spread spectrum modulation. This technique trades bandwidth for sensitivity [17] and is also used in communication technologies such as WiFi [18, 19] and Zigbee [20]. LoRa modulation is characterized by maximizing sensitivity, even approaching the limit of Shannon's theorem [21]. While LoRa achieves such long-distance transmission, it also sacrifices some data rate: the stronger the anti-interference ability and the longer the transmission distance, the less data can be transmitted per unit time. In this trade-off, the SF plays the key role [22].

At present, the mainstream methods for setting the SF are Min-distance and Min-airtime [23], which select the SF that is optimal with respect to distance and transmission time, respectively. Neither method considers the correlation between nodes when the number of nodes is large or when nodes transmit at similar times, so data collisions increase as the number of nodes grows. How to reduce the data collision rate between LoRa nodes is the main concern of this paper.

In this paper, we develop a DRL-based LoRa SF allocation optimization method to dynamically optimize the SF allocation of LoRa nodes, thereby reducing collisions between nodes. To showcase its efficiency, we compare the proposed DRL-based method with traditional SF assignment methods based on node features. The contributions can be summarized as follows:
(i) We first study the performance characteristics of nodes under different SFs for the LoRa collision problem. For different SFs, virtual simulation scenarios are built to simulate the transmission characteristics. The comparative analysis shows that the SF with the best rate does not necessarily yield the lowest collision rate.
(ii) In contrast to existing algorithms that select the SF based on node features alone, this paper takes the influence of the channel environment on the SF into account and proposes an optimization algorithm based on the DRL framework. The algorithm selects the optimal SF by combining each node's own characteristics with channel environment information. Since the gateway cannot directly observe whether a collision occurred between nodes, we use whether a node retransmits to infer collisions or packet loss. Finally, we redesign the state and action parameters of the algorithm accordingly.
(iii) On the simulation platform, we compare the performance of the DRL-based SF optimization algorithm with the feature-based Min-distance and Min-airtime algorithms. When the number of nodes reaches 1000, the collision rate of the proposed optimization algorithm is reduced by nearly 30%, and its total energy consumption is close to that of Min-distance.

The rest of the paper is organized as follows. Section 2 presents related work. Section 3 provides the problem formulation and system model. Section 4 proposes the DQN-based SF allocation. Section 5 presents the numerical results, and finally, Section 6 summarizes the conclusion and future work.

2. Related Work

When LoRa was first introduced, nodes used the pure ALOHA [24] protocol to send data: a node does not perform channel detection but transmits directly. As the number of terminals or the number of transmitted packets increases, the probability that packets from multiple terminals collide on the channel grows greatly. Because this basic LoRa mechanism is so simple, the LoRa Alliance built the LoRaWAN protocol [25, 26] on top of LoRa. LoRaWAN introduces a duty cycle to prevent a node from occupying the channel all the time, which avoids data conflicts to a certain extent, but under heavy traffic the duty cycle increases data delay, so its advantage is not obvious when a large amount of data is sent concurrently. In addition, LoRaWAN introduces a CAD [27] mechanism to reduce the probability of LoRa collisions, but this undoubtedly increases the power consumption of LoRa nodes. Most importantly, it neither relieves the crowding of LoRa nodes on the channel nor increases the capacity of the gateway. At present, Class A mode in LoRaWAN still uses pure ALOHA. According to calculations, the channel utilization of the pure ALOHA protocol is only 18.4%, and most transmissions collide in the channel. Therefore, how to improve LoRa so as to increase its gateway capacity and reduce the collision rate has become a major research topic in the industry.
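The 18.4% figure quoted above is the classic pure-ALOHA throughput bound; a short derivation of this standard result (not specific to LoRa) is sketched below.

```latex
% Pure ALOHA: for offered load G (attempts per packet time), a packet
% succeeds only if no other packet starts within a 2-packet-time window.
\[
  S(G) = G\,e^{-2G}, \qquad
  \frac{dS}{dG} = (1 - 2G)\,e^{-2G} = 0 \;\Rightarrow\; G = \tfrac{1}{2},
\]
\[
  S_{\max} = \tfrac{1}{2}\,e^{-1} \approx 0.184,
\]
% i.e., at most about 18.4% of the channel time carries successfully
% received packets.
```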

To this end, industry and academia have done a great deal of work on LoRa collision optimization. Edward et al. [28] aim to increase gateway capacity by introducing Interleaved Chirp Spreading LoRa- (ICS-LoRa-) based modulation. In [29], the authors optimize the transmission parameters of a LoRaWAN system in a high-density smart-city traffic environment using golden section search and parabolic interpolation. Floris et al. [30] used ns-3 to simulate and analyze the LoRa network; their analysis shows that increasing gateway density can ameliorate but not eliminate packet loss, as stringent duty cycle requirements for gateways continue to limit downstream opportunities. Reynders et al. [31] present a scheme to efficiently optimize the packet error rate fairness inside a LoRaWAN cell; this is achieved by optimizing the power and SF for each node while avoiding near-far problems by allocating distant users to different channels. In [32], Abdelfadeel et al. present a study of data rate fairness among nodes within a LoRaWAN cell. To make the rates of the nodes fairer, they further develop a transmission power control algorithm that balances the received signal powers from all nodes regardless of their distances from the gateway. However, this algorithm only considers the case where nodes are clustered close to the gateway.

As a new class of intelligent decision-making methods, AI has been widely used in many fields. In network resource scheduling, many researchers have introduced AI for scheduling and decision optimization. Jiang et al. [33] used reinforcement learning to optimize the throughput and transmission time interval of NB-IoT. Yang et al. [34] utilize a deep neural network (DNN) to configure optimized NOMA for network resource management; the DNN not only greatly improves computational efficiency but also improves the sum rate of the system. In [35], a DQN method is used to control the handover (HO) procedure of user equipments (UEs) by capturing the characteristics of wireless signal interference and network load. Experimental results show that the proposed scheme can reduce the HO rate and guarantee the system throughput, outperforming the traditional HO scheme.

In this article, we use a reinforcement learning algorithm to further improve the distance-based optimization algorithm and redistribute the SFs of the nodes, so as to reduce the channel collision rate as the number of nodes or the traffic grows and ultimately increase the node capacity under a single gateway. In Section 3, we first introduce LoRa's communication collision model, and the problem formulation is then presented.

3. System Model and Problem Analysis

3.1. LoRa Communication Model

On the basis of the LoRa physical layer, the LoRa Alliance released the LoRaWAN protocol. In LoRaWAN [36], Class A mode must be implemented by default, and it mainly uses the ALOHA protocol for data transmission [37]. The basic idea is that each node can send a data frame at any time and then check whether a conflict occurred. If a conflict occurs, the node waits for a random period of time and retransmits until the transmission succeeds. The collision process is shown in Figure 1.

In LoRa, four main parameters generally affect conflicts between data, namely the center frequency, SF, bandwidth (BW), and coding rate (CR). With a reasonable configuration, a variety of mutually orthogonal combinations can be used to avoid conflicts [38]. The four parameters are described as follows (an example configuration is sketched after the list):
(1) CR. The coding rate is the ratio of the useful part of the data stream. LoRa uses cyclic error-correction coding for forward error detection and correction; however, this method introduces transmission overhead. The specific overhead is shown in Table 1.
(2) Center Frequency. This is the frequency in the middle of the filter passband; in LoRa, it must be set according to local laws and regulations. LoRaWAN includes a random frequency-hopping mechanism, mainly to cope with LoRa duty-cycle regulations and thus allow large data packets to be transmitted. This mechanism can also be effective in reducing collisions at certain times.
(3) BW. The signal bandwidth limits the lower and upper frequencies of the signal allowed to pass through the channel. In LoRa, increasing the BW improves the payload transmission rate and reduces the transmission time, but it also reduces the receiver sensitivity.
(4) SF. LoRa spread spectrum modulation is realized by representing each bit of payload data with multiple chips. Since different SFs are orthogonal to each other, the SF must be known in advance by both ends of a transceiver link.
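For concreteness, the following hypothetical parameter set shows how these four parameters might be configured; the dictionary layout is our own illustration, and the values follow typical LoRaWAN settings and the simulation in Section 5 rather than a prescribed configuration.

```python
# Hypothetical LoRa radio configuration (illustrative only)
lora_params = {
    "center_frequency_hz": 868_100_000,  # region-dependent; an EU868 channel is shown
    "spreading_factor": 7,               # LoRaWAN uses SF7-SF12
    "bandwidth_hz": 125_000,             # 125, 250, or 500 kHz
    "coding_rate": "4/5",                # forward error-correction overhead (see Table 1)
}
```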

The modulation of LoRa signals is orthogonal under different SF and BW combinations, so data can be transmitted in a code-division-multiple-access (CDMA) fashion. In the same channel, if the BW is unchanged, multiple quasi-orthogonal data streams can be transmitted without interfering with each other by changing the SF. The SF ranges from 5 to 12, giving a total of 8 possible values; LoRaWAN uses only the six values from 7 to 12, and the data rate corresponding to each value is different. The rate determines the air time (time on air), which can be calculated by the following formula.

$$T_{air} = N_{sym} \cdot \frac{2^{SF}}{BW}$$

$T_{air}$ is the air time, and $N_{sym}$ is the number of symbols. Because the modulation parameters differ, the calculation of the number of symbols also differs; the specific calculation is given in the following formulas.

For $SF = 5$ and $SF = 6$,

$$N_{sym} = N_{pre} + 6.25 + 8 + \left\lceil \frac{\max\left(8PL + N_{CRC} - 4SF + N_{H},\, 0\right)}{4SF} \right\rceil (CR + 4)$$

For other SFs,

$$N_{sym} = N_{pre} + 4.25 + 8 + \left\lceil \frac{\max\left(8PL + N_{CRC} - 4SF + 8 + N_{H},\, 0\right)}{4SF} \right\rceil (CR + 4)$$

When CRC is turned on, $N_{CRC} = 16$; otherwise it is 0. $N_{H} = 20$ in explicit header mode and 0 in implicit header mode. $PL$ is the payload length in bytes, and $N_{pre}$ is the number of preamble symbols. The air time required for different SFs and packet lengths can be calculated with the above formulas and is shown in Figure 2.
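As a quick way to reproduce curves like those in Figure 2, the short Python sketch below evaluates the air-time formulas reconstructed above. It is only an illustration under stated assumptions (default LoRaWAN preamble of 8 symbols, no low-data-rate optimization); the function and argument names are our own.

```python
import math

def lora_airtime(payload_bytes, sf, bw_hz=125_000, cr=1,
                 preamble_syms=8, crc_on=True, explicit_header=True):
    """Approximate LoRa time on air in seconds (low-data-rate optimization not modeled).

    payload_bytes: PL, payload length in bytes
    sf:            spreading factor, 5-12
    bw_hz:         bandwidth in Hz (125/250/500 kHz)
    cr:            coding-rate index (1 -> 4/5, ..., 4 -> 4/8)
    """
    t_sym = (2 ** sf) / bw_hz                  # symbol duration
    n_crc = 16 if crc_on else 0                # CRC bits
    n_hdr = 20 if explicit_header else 0       # explicit-header bits
    extra = 0 if sf in (5, 6) else 8           # extra term for SF >= 7
    base = 6.25 if sf in (5, 6) else 4.25
    n_sym = (preamble_syms + base + 8 +
             math.ceil(max(8 * payload_bytes + n_crc - 4 * sf + extra + n_hdr, 0)
                       / (4 * sf)) * (cr + 4))
    return n_sym * t_sym

# Example: 20-byte payload at 125 kHz
print(lora_airtime(20, 7))    # about 0.06 s
print(lora_airtime(20, 12))   # about 1.3 s
```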

If the air time of a transmission is longer, the channel is occupied for longer. If other nodes transmit with the same SF during this period, the data of the two nodes interfere with each other, the gateway fails to receive them, and the nodes retransmit when no response is received. The larger the SF, the smaller the capacity of the channel and the greater the probability of collision. How to distribute the SFs reasonably therefore becomes the problem we are concerned with. In the next subsection, we further analyze how LoRa's SF affects the success rate of data reception.

3.2. Problem Analysis and Description

In LoRa, four main factors affect node data collisions, namely frequency collision, SF collision, power collision, and timing collision. The SF is an important parameter of LoRa transmission, and the effect of different SF settings on collisions is particularly obvious. To verify the impact of the SF used by LoRa nodes on the success rate of network data transmission, this paper uses the Python-based simulator LoRaSim to simulate LoRa node collisions and verify the impact of different SFs on the success rate of data reception at the LoRa gateway. Nodes are randomly distributed within a radius of 2 kilometers around the gateway. The spreading factor is set to a value that ensures each node can communicate with the LoRa gateway normally. The packet sending interval of each node is 5 minutes, the packet payload length is 20 bytes, the bandwidth is 125 kHz, and the number of gateway channels is 1. The total simulation time is 2 hours. Finally, under different SFs, the successful data reception rate as the number of nodes increases is obtained, as shown in Figure 3.

Figure 3 shows differences in the data transmission success rates of different SFs for the same number of nodes: a low SF yields a higher data transmission success rate, while a high SF yields a lower one. As the number of nodes increases, the success rate of data transmission is further affected by the SF. With a high SF, once the number of nodes exceeds 300, the success rate of data transmission already drops to a level that seriously affects the reliability of the network.
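To give a feel for how curves like those in Figure 3 can be produced, the sketch below uses SimPy to count same-SF time-overlap collisions for randomly transmitting nodes. It is not LoRaSim's actual code: Poisson packet arrivals, the simplified one-sided collision check, and the hard-coded air times (20-byte payload at 125 kHz, from the formulas in Section 3.1) are assumptions of this illustration.

```python
import random
import simpy

SIM_TIME = 2 * 60 * 60        # 2 hours, in seconds
AVG_INTERVAL = 5 * 60         # mean packet interval: 5 minutes
# approximate air time (s) of a 20-byte packet at 125 kHz, SF7..SF12
AIRTIME = {7: 0.057, 8: 0.103, 9: 0.185, 10: 0.371, 11: 0.660, 12: 1.319}

stats = {"sent": 0, "collided": 0}
on_air = []                   # (end_time, sf) of packets currently in the channel

def node(env, sf):
    while True:
        yield env.timeout(random.expovariate(1.0 / AVG_INTERVAL))
        start = env.now
        stats["sent"] += 1
        # simplified, one-sided check: this packet is counted as collided if it
        # starts while another packet with the same SF is still on the air
        if any(s == sf and end > start for end, s in on_air):
            stats["collided"] += 1
        on_air.append((start + AIRTIME[sf], sf))
        on_air[:] = [(end, s) for end, s in on_air if end > env.now]  # prune finished packets

env = simpy.Environment()
for _ in range(300):                      # e.g. 300 nodes, all forced to SF12
    env.process(node(env, sf=12))
env.run(until=SIM_TIME)
print(stats, "collision rate:", stats["collided"] / stats["sent"])
```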

Since a higher SF provides stronger anti-interference ability, transmitting the same amount of data occupies the channel for longer and consumes more energy. Additional energy is also required for retransmissions caused by packet loss or data collisions. Figure 4 shows the energy consumption of data transmission with different SFs. Therefore, when the data can be delivered, a lower SF is generally preferred for transmission.
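As a rough back-of-the-envelope check on Figure 4, per-packet transmit energy can be approximated as supply voltage times transmit current times air time; the current and voltage below are placeholder values, not measurements from this paper.

```python
# Rough per-packet transmit energy: E = V * I_tx * T_air
V, I_TX = 3.3, 0.028                        # placeholder supply voltage (V) and TX current (A)
airtime_sf7, airtime_sf12 = 0.057, 1.319    # 20-byte packet at 125 kHz (Section 3.1 formulas)
print("SF7 :", V * I_TX * airtime_sf7, "J")   # about 5 mJ
print("SF12:", V * I_TX * airtime_sf12, "J")  # about 0.12 J
```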

Transmitting with the optimal SF has its advantages, but when the number of nodes is large, if many nodes select the same optimal SF, the collision rate at that SF increases. Thanks to the orthogonality between SFs, the LoRa gateway can receive data from nodes with different SFs or bandwidths at the same time. The set of multiple-access combinations that LoRa can use can be expressed as follows:

$$\mathcal{M} = \left\{ (SF, BW) \mid SF \in \{7, \ldots, 12\},\; BW \in \{125, 250, 500\}\ \mathrm{kHz} \right\}$$

In an actual deployment, letting all nodes select the optimal SF may therefore increase the collision rate instead. With the help of LoRa's ability to demodulate different SFs at the same time, the problem of allocating SFs more reasonably, so as to maximize the utilization of network resources and reduce the collision rate of nodes, can be described as follows:

$$\min_{\{SF_i\}_{i=1}^{N}} \; \beta \cdot \frac{N_{col}}{N} + \gamma \cdot \frac{N_{loss}}{N}$$

The formula states that the allocated SFs should yield the smallest collision rate with no packet loss. In Formula (5), $N$ is the number of nodes, $N_{col}$ and $N_{loss}$ are the numbers of collided and lost packets, and $\beta$ and $\gamma$ are weighting (reduction) coefficients. In order to prevent packet loss as much as possible, the weight $\gamma$ should be set larger.

4. SF Allocation Based on DRL

A reinforcement learning algorithm is an AI algorithm that optimizes itself according to changes in the environment. It is mainly composed of an agent and an environment: the agent observes the state parameters it needs from the environment and outputs corresponding actions according to those states. Reinforcement learning algorithms have achieved impressive results in many fields, such as game AI and autonomous driving.

In order to optimize the distribution of the SF, in this section we propose to embed a reinforcement learning algorithm, combined with LoRa's distance-based optimization, in the LoRa network server. Figure 5 shows a schematic diagram of the Deep Reinforcement Learning (DRL) algorithm embedded in the network server.

In Figure 5, the AI algorithm is embedded in the network server, and the LoRa gateway acts as a forwarder for the nodes. A LoRa node sends uplink data based on the ALOHA mechanism; only when the data does not collide in the channel can the LoRa gateway successfully receive the data sent by the node and forward it, together with the channel environment information (Channel INF), to the network server. After the network server receives and parses the data, it uses the information recorded for each node (whether a retransmission occurred, sending time, etc.) to determine whether collisions happened between nodes. The AI algorithm then gives a corresponding adjustment strategy according to the node and channel information. The strategy is forwarded through the gateway as a MAC command, sent when the node's receive window is open, so that the channel parameters of each node are adjusted to make maximal use of channel resources and reduce the data collision rate.

4.1. Deep Reinforcement Learning Algorithm Model

The deep reinforcement learning algorithm used here is the Deep Q Network (DQN) algorithm [39]. DQN approximates the value function of Q-learning with a neural network [40]. Q-learning is a model-free reinforcement learning technique proposed by Watkins in 1989. For a given environmental state, it can form reasonably good expectations about actions without an environment model, and it can handle stochastic transitions and rewards without modification. It has been shown that, for any finite MDP, Q-learning eventually finds an optimal policy, i.e., one that maximizes the expected total return over all successive steps starting from the current state [41]. Before learning begins, $Q(s, a)$ is initialized to an arbitrary fixed value. Then, at each time $t$, the agent chooses an action $a_t$, receives a reward $r_t$, and enters a new state $s_{t+1}$, and the value $Q(s_t, a_t)$ is updated. Its core is the value-function iteration

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_t + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]$$

where $\alpha$ is the learning rate and $\gamma$ is the discount factor.
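A minimal tabular illustration of this update rule is given below; the state/action space sizes and hyperparameter values are placeholders, not settings used in this paper.

```python
import numpy as np

n_states, n_actions = 10, 4           # placeholder sizes
alpha, gamma = 0.1, 0.9               # learning rate and discount factor (illustrative)
Q = np.zeros((n_states, n_actions))   # Q(s, a) initialized to a fixed value

def q_update(s, a, r, s_next):
    """One Q-learning value-iteration step."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
```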

However, when the state space is very large or continuous, Q-learning faces the curse of dimensionality and the difficulty of storing and retrieving the table, so a neural network is introduced to approximate the value function [42, 43]. After introducing the neural network, the problem becomes how to determine the network weights $\theta$ such that $Q(s, a; \theta)$ approximates the value function. In this paper, gradient descent is used to minimize a loss function in order to tune the network weights $\theta$. The loss function is defined as follows:

$$L(\theta) = \mathbb{E}\left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta) \right)^{2} \right]$$

In the above formula, we use two neural networks: one is the Q network and the other is the target network. The purpose of introducing the target network is to reduce the correlation between the current value and the target value, thereby improving the stability of the algorithm. Specifically, $\theta^{-}$ denotes the target network parameters of the current iteration, and $\theta$ denotes the parameters of the current Q network.

In addition, experience replay is introduced to store past transitions, and minibatches are randomly sampled from the experience pool to update the neural network parameters, thereby breaking the correlation between data and improving data utilization. The learning process of DQN is shown in Figure 6.

4.2. DQN Parameters Design

In this section, we will introduce how DQN integrates and interacts with LoRa’s network server, including some specific parameters of the environment state as well as the action parameters generated by DQN.

4.2.1. State

The LoRa network server receives packet data in the LoRaWAN format from the gateway. Each data packet includes the node ID, packet length, transmission interval, SF, coding rate, and bandwidth. Each node has a unique ID number, from which the network server knows the location of each node and its distance dis from the forwarding gateway. The packet information also reveals whether a retransmission has occurred; if a packet is retransmitted, it means the packet collided or was lost. The state of the environment can then be represented as a 7-dimensional vector.

Here, dis represents the distance between the gateway and the node device; this value is recorded at the network server when the node joins the network for the first time. The SF is an integer from 7 to 12, and the optional BW values are 125, 250, and 500 kHz.

4.2.2. Action

The action consists of two parts, namely the selection of the SF and the selection of the bandwidth, giving a total of 18 combinations to choose from. The output of the neural network is therefore an 18-dimensional vector, and the action corresponding to the largest value is selected.
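A minimal sketch of how the 18 discrete actions could be enumerated from the 6 SF values and 3 bandwidths is shown below; this particular index mapping is our own illustration, not necessarily the paper's exact encoding.

```python
SF_VALUES = [7, 8, 9, 10, 11, 12]
BW_VALUES = [125, 250, 500]        # kHz

# action index -> (SF, BW) pair; 6 * 3 = 18 discrete actions
ACTIONS = [(sf, bw) for sf in SF_VALUES for bw in BW_VALUES]

def action_to_config(action_index):
    """Map the argmax of the 18-dimensional Q-value vector to an (SF, BW) setting."""
    return ACTIONS[action_index]

# e.g. for a Q-value vector q of length 18: sf, bw = action_to_config(int(np.argmax(q)))
```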

4.2.3. Reward

In reinforcement learning, the agent's goal is formally represented by a special signal, called the reward, which the environment passes to the agent; at each instant, the reward is a single scalar value. In this paper, the quantities to be optimized are the collision rate between nodes and the packet loss rate of the nodes, so the reward is constructed as a weighted penalty on collisions and packet loss, with the same weighting as in Formula (5).

The goal of RL is to maximize the cumulative discounted reward by finding an optimal policy. We therefore define the long-term reward as the accumulated and discounted reward

$$R_t = \sum_{k=0}^{\infty} \gamma^{k} r_{t+k}$$

Here, $\gamma \in [0, 1]$ is the reduction (discount) coefficient. When it is 0, only the immediate reward is considered; the larger it is, the more weight is given to longer-term rewards.

At the end of this section, we present the procedure of the DQN-based SF allocation optimization algorithm, shown in Algorithm 1.

Initialize LoRa nodes based on LoRaWAN
Initialize replay memory D to capacity N
Initialize action-value function Q with random weights θ
Initialize target action-value function Q' with weights θ⁻ = θ
For episode = 1, M do
  Initialize and choose state s_1 from the LoRa network server (MAC)
  For t = 1, T do
    With probability ε select a random action a_t, otherwise select a_t = argmax_a Q(s_t, a; θ); execute action a_t in the emulator
    Observe reward r_t and next state s_{t+1}
    Store experience (s_t, a_t, r_t, s_{t+1}) in D
    Sample a random minibatch of experiences (s_j, a_j, r_j, s_{j+1}) from D
    If episode terminates at step j + 1 then
      y_j = r_j
    Else
      y_j = r_j + γ max_{a'} Q'(s_{j+1}, a'; θ⁻)
    End if
    Perform a gradient descent step on (y_j − Q(s_j, a_j; θ))² with respect to the network parameters θ
    If batch size >= memory capacity then
      Update the target network weights θ⁻ ← θ
    End if
  End for
End for

5. Performance Evaluation

In this section, we first introduce the LoRa network environment and the settings of the related parameters; then we train the proposed algorithm model on the LoRaSim simulator. Finally, we give a performance comparison of LoRa under the different algorithms.

5.1. Parameter Settings

We will use the Python-based LoRa simulator LoRaSim to simulate the LoRa network communication environment. In LoRaSim, the process of communication between multiple nodes and gateways is simulated through the SimPy-based discrete event library, and each node and gateway is maintained by a thread. When each thread is simulating sending packets, a collision function will be used to simulate whether a collision occurs. The settings of the collision function include SF collision, frequency collision, and time collision. In order to be as close to the real communication environment as possible, the parameters of the network environment will be set according to the LoRaWAN protocol. The specific environment parameter settings are shown in Table 2.

For the neural network model, considering that the number of states and actions is not large, the two neural networks in DQN both adopt a three-layer model with 50 neurons in each layer. The other algorithm parameters are shown in Table 3.
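The following PyTorch sketch shows one way such a network and its DQN update could look, with a 7-dimensional state input (Section 4.2.1) and 18 actions (Section 4.2.2). It is an illustration only: the interpretation of "three layers" as three hidden layers, the optimizer choice, and the replay-memory size, batch size, and discount factor are our assumptions rather than the paper's reported settings (except the 0.1 learning rate selected in Section 5.1).

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM, N_ACTIONS = 7, 18      # 7-dimensional state, 18 (SF, BW) combinations

class QNet(nn.Module):
    """Q network: three fully connected hidden layers with 50 neurons each."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 50), nn.ReLU(),
            nn.Linear(50, 50), nn.ReLU(),
            nn.Linear(50, 50), nn.ReLU(),
            nn.Linear(50, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=0.1)   # 0.1 as selected in Section 5.1
replay = deque(maxlen=10_000)                        # experience replay memory (size assumed)
GAMMA, BATCH = 0.9, 32                               # discount and batch size (assumed)

def train_step():
    """One DQN update: sample a minibatch and take a gradient step on the TD error."""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    s, a, r, s_next, done = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)              # Q(s, a; theta)
    with torch.no_grad():
        y = r + GAMMA * target_net(s_next).max(1).values * (1 - done)     # TD target
    loss = nn.functional.mse_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```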

In the training simulation environment, we set 1500 nodes and initialize them with the default SF allocation; the data volume of each node is 50 bytes, the sending interval is 5 minutes, and the total simulation time is 2 hours. To prevent overfitting, training is limited to the first 10000 episodes. Three different learning rates are set, and the training curves of the algorithm are presented in Figure 7.

As shown in Figure 7, when the learning rate is 0.001 or 0.01, training requires a large number of episodes to reach a good result, and even after convergence the collision rate is still higher than with a learning rate of 0.1. Therefore, we choose a learning rate of 0.1.

5.2. Algorithm Performance and Comparison

In the LoRa simulation experiments in this article, we select two widely used algorithms for comparison, namely Min-distance and Min-airtime. The Min-distance strategy allocates the SF according to the range of the RSSI value received by the gateway: a low RSSI value corresponds to a high SF, and a high RSSI value corresponds to a low SF. This method uses low SFs to improve the success rate of data transmission but does not fully exploit the orthogonality between SFs, assigning almost all nodes to the lowest SFs. The Min-airtime strategy adaptively selects, for each node, the combination of bandwidth, SF, and coding rate that minimizes the air time. Compared with Min-distance, this deployment is more flexible.

In this experiment, the number of nodes ranges from 100 to 1000 in steps of 100. The node location information is initialized and fixed when a node joins the network. The transmission interval is set to 5 minutes, and the packet length is set to 50 bytes. The total simulation time is 12 hours. The comparison between the DQN algorithm proposed in this paper and the two baseline algorithms is shown in Figure 8.

In Figure 8, the DQN algorithm redistributes the SFs, and as the number of nodes increases, its data collision rate is significantly lower than that of the other two algorithms; at 1000 nodes, the collision rate is reduced by 24%. Collisions between nodes tend to occur in bursts: if a large number of collisions occur within a short period, the required retransmissions mean that the collisions cannot be relieved quickly. DQN therefore spreads nodes over more SFs, including some SF options with lower rates, in order to avoid such bursts of collisions. Consequently, compared with the Min-distance and Min-airtime methods, DQN reduces collisions more significantly when there are more nodes.

Figure 9 shows the energy consumption under the three methods. Because DQN assigns some nodes a higher SF, more energy is required to complete their transmissions; its energy consumption is therefore slightly higher than that of Min-airtime but essentially the same as that of Min-distance. Energy consumption grows with the SF, so although DQN reduces node collisions, its energy consumption increases due to the allocation of higher SFs.

Adjusting the SF with DQN also causes a certain amount of packet loss, which results from assigning unreasonable SFs to nodes that suffer from strong interference. Min-airtime and Min-distance allocate the SF by calculating in advance, ensuring that a node will not lose packets when sending data, and then select the optimal SF; in the simulation, the packet loss rate of these two methods is indeed 0. DQN, in contrast, explores alternative SFs during allocation, and if an unreasonable SF is selected, packet loss occurs. In the DQN algorithm, the packet loss rate of a node is unrelated to the number of nodes and is instead related to the exploration rate; packet loss generally occurs while DQN is still learning the environment. As the algorithm observes and learns from the environment, the total packet loss rate stays below 3%. The packet loss rate comparison is shown in Figure 10.

Figure 11 gives the success rate of data reception at the gateway under the different methods. For Min-airtime and Min-distance, the main factor affecting the reception success rate is the conflict between data. For DQN, the SF adjustment introduces a certain degree of packet loss but greatly reduces conflicts between data. As the number of nodes increases, the data reception success rate of DQN becomes better than that of Min-airtime and Min-distance: with 1000 nodes, it is 27% higher than Min-airtime and 28% higher than Min-distance.

6. Conclusion

Aiming at the problem that LoRa is prone to data collisions in large-scale node scenarios, this paper proposes an optimization algorithm for LoRa SF allocation based on deep reinforcement learning. By analyzing the performance of different LoRa SFs and their impact on the collision rate, it is found that collisions between nodes can be effectively reduced by reasonably configuring the SFs of LoRa nodes. We developed a LoRa SF allocation optimization algorithm based on DQN: according to the analysis of the LoRa SF, the environmental state information that has a major impact on the collision rate was selected, and the action space was constructed from the SF and bandwidth combinations. The simulation results show that as the number of nodes increases, the proposed algorithm effectively reduces the data collision rate between LoRa nodes, with energy consumption close to that of Min-distance.

Due to the exploration behavior of the DRL algorithm, a small amount of packet loss occurs at LoRa nodes. In future work, we expect to further reduce this packet loss. In addition, as satellite-based cooperative communication matures, optimization for multimode communication will also become a research direction. Such multiobjective optimization is challenging and should be addressed in the future.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was sponsored by SZTU-Winoble Cooperation Research Project (No. 2021010802015 and No. 20213108010030); Scientific Research Capacity Improvement Project from Guangdong Province (No.2021ZDJS109); SZTU Experimental Equipment Development Foundation (No. JSZZ202102007).