Abstract

While mobile networks provide many opportunities for people, they face security threats serious enough that a firewall is essential. A firewall in a mobile network protects the intranet by handling and processing all traffic that crosses its boundary. Furthermore, because resources in mobile networks are limited, firewall execution can impact the quality of communication between the intranet and the Internet. In this paper, a mathematical model for evaluating the performance of the firewall system of mobile networks is developed using queuing theory, targeting a multihierarchy firewall with multiple concurrent services. The throughput and the packet loss rate are employed as performance evaluation indicators, and discrete-event simulation experiments are conducted for verification. Lastly, experimental results are compared with theoretically obtained values to identify a resource allocation scheme that provides optimal firewall performance and can offer a better quality of service (QoS) in mobile networks.

1. Introduction

Due to the rapid development of mobile Internet technologies and the proliferation of network access, the mobile Internet is progressively becoming a significant part of everyone’s personal and professional lives. Consequently, the security issues of mobile networks tend to attract more attention. Packet-based filtering in traditional firewalls fails to satisfy the security requirements of end users. Although it is an essential component of the entire trusted intranet, a traditional firewall suffers from the disadvantages of simple packet filtering and lower-layer processing. Compared with traditional firewalls, new firewalls are capable of comprehensive data analysis at the application layer, which enables higher-level processing and more comprehensive protection.

Although new firewalls are able to provide a trusted intranet with better security, they consume large amounts of time analyzing and processing network packets because they perform higher-level assessment and analysis. Furthermore, heavier traffic may reduce processing efficiency. This makes the firewall prone to becoming a performance bottleneck for communication between the intranet and the Internet, consequently reducing the quality of service (QoS) of mobile networks. To overcome this problem, limited system resources (such as the number of threads and CPU cores) must be rationally assigned when developing a firewall, because modifying the assignment of resources may directly affect overall system performance. In general, when developing a system such as a firewall, a corresponding performance test needs to be performed: software engineers measure the throughput of the current configuration, tune the system, and repeat the tests until an optimal resource allocation scheme is obtained. This method of testing and performance tuning is time-consuming and complex. If, instead, a mathematical model is constructed to predict the common performance indicators of the firewall, such as throughput and packet loss rate, considerable savings in developer labor costs may be achieved, and the optimal resource allocation scheme may be derived directly.

In this paper, a performance evaluation model for the firewall system of mobile networks is proposed using queuing theory. Generally, this model can be divided into two phases. Phase one handles traffic at the lower layers, including examinations at the network layer and the transport layer. Phase two handles the upper-layer applications, consisting of DNS, FTP, and HTTP. Each phase involves concurrent processing with multiple service stations. A discrete-event simulation experiment is conducted to validate the model.

The rest of the paper is organized as follows: Section 2 reviews the current state of related research and designates the innovations proposed in this paper. Section 3 discusses the mathematical modeling of a firewall system for mobile networks. Section 4 describes the simulated experiments performed using the developed model and analyzes the results of conducted experiments. Finally, Section 5 summarizes the results and conclusions.

2. Related Work

With the increasing complexity of mobile network security issues, many researchers have been studying and analyzing firewall equipment. As the methods and means of network attacks continue to multiply [1], the functions of firewall systems are being enhanced and extended gradually, for example, with intrusion detection, DDoS attack detection, and user-behavior detection [2, 3]. Moreover, a number of researchers have conducted network security analysis. Peng et al. mainly performed analysis with respect to user behaviors [4–6]. Chen et al. provided a survey of security problems in mobile networks [7]. A resource optimization approach for mobile networks under imperfect prediction was proposed in [8].

Queuing theory, which has significant potential for the mathematical modeling of network equipment, may be useful for analyzing the performance of such equipment [9]. Some researchers have modeled and analyzed systems such as firewalls based on this theory and have made relatively advanced achievements in the study of network equipment performance. Herrmann constructed a vacation model for network service analysis, where batch arrivals are used rather than the common Poisson flow [10]. Furthermore, in some studies, large-scale cloud services [11] have been modeled as complex processing systems with finite queues [12–14]. Salah et al. made considerable achievements by proposing a service system with a finite queue and a single window, where packets arrive in a Poisson flow and the service time follows a negative exponential distribution [15, 16]. Salah [17] compared the applicability of Erlang distribution modeling with that of Markov chain modeling. Zapechnikov et al. conducted an in-depth study based on Salah’s work, proposing a two-phase service system in which Phase 1 was based on a Markov chain model and Phase 2 employed a hyper-Erlang distribution model [18]. Some researchers have also investigated cloud-service-based firewall systems [19, 20].

Several recent studies have only compared modeling methods, while others have merely discussed the handling process of a single service window or conducted simple hierarchical modeling [21]. None of these can describe the firewall workflow or depict its handling process in detail. With advances in science and technology, the widespread use of multicore processors, and the increased availability of hardware resources, it is becoming difficult for the above-mentioned methods to fully exploit hardware performance. In this paper, an in-depth examination of these issues is conducted. Based on firewall handling and processing characteristics, a two-phase multiservice station [22] and multiprotocol firewall model with multiple concurrent applications is proposed, which is further employed to analyze the overall system performance [23].

3. Model Analysis

Using an Erlang queuing model, this paper proposes a two-phase multiservice station and multiprotocol firewall model with multiple concurrent applications. The model is hierarchically divided into two phases. When a packet enters the system, it is first processed according to the rules at the network layer and the transport layer, and then handled by the DNS, FTP, and HTTP protocol modules. The process flow varies slightly with the characteristics of the various protocols. The hierarchy of the firewall model is shown in Figure 1.

The related parameters with respect to the model are defined as follows.

λ denotes the arrival rate of packets at the firewall. In Phase 1, K_1 represents the buffer size, c_1 denotes the number of service windows, r_1 is the number of rules, and μ_1 is the processing rate of each rule. Phase 2 is divided into three main parts, namely, DNS, FTP, and HTTP; here, p_d, p_f, and p_h denote the proportions of DNS, FTP, and HTTP traffic, respectively. In the DNS handling part, K_d denotes the buffer size, c_d represents the number of service windows, r_d represents the number of rules, and μ_d is the handling rate of each rule. The FTP handling component consists of FTP command handling operations and FTP data transmission handling operations. In the FTP command handling part, K_f denotes the buffer size, c_f represents the number of service windows, r_f is the number of rules, and μ_f represents the handling rate of each rule. The FTP data transmission handling part may be divided into multiple transmissions; for example, n_f concurrent FTP data transmission handling operations are shown in Figure 2, where the i-th transmission carries the traffic proportion p_fi. Further, K_fi represents the buffer size for the i-th data transmission handling module, c_fi denotes its number of service windows, r_fi is its number of rules, and μ_fi denotes the handling rate of each rule. The HTTP part consists of HTTP-request line handling, HTTP-header handling, and HTTP-body handling, the last of which comprises n_h concurrent applications; p_hi denotes the traffic proportion of the i-th HTTP application. K_l and K_e denote the buffers for HTTP-request line handling and HTTP-header handling, respectively, and K_bi denotes the buffer for the body handling of the i-th application; c_l, c_e, and c_bi represent the corresponding numbers of service windows; r_l, r_e, and r_bi denote the corresponding numbers of rules; and μ_l, μ_e, and μ_bi denote the corresponding handling rates of each rule.

The firewall model proposed in this paper refines a number of essential upper-layer handling processes in which the traffic flow proportion or number of applications may be easily modified. In addition, this model comprises the commonly used handling processes for DNS, FTP, and HTTP, each of which may be extended as an individual template for other upper-layer application protocols.

3.1. Low-Level Analysis

As shown in Figure 1, the entire model consists of multiple single-buffer and multiconcurrent Erlang processes. Hence, the computation of the entire model may be based on cumulative computations of individual submodules. Packets enter the system sequentially, following a Poisson flow pattern; then, they wait in the buffer to be further handled by the concurrent service windows. Each service window comprises multiple subservice stations that correspond to each rule. The handling efficiency of the service windows follows an Erlang distribution. The structure of the submodules is shown in Figure 2.
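As a quick numerical illustration of this service model, the time a window spends on one packet is the sum of r independent exponential per-rule handling times, which is exactly an Erlang-distributed service time. The following sketch (parameter values illustrative, function name ours) checks that the sampled mean matches the Erlang mean r/μ:

```python
import random
from statistics import fmean

rng = random.Random(0)

def service_times(mu, r, n):
    """Sample n service times for one window with r rules, each rule taking
    an exponential time with rate mu; the sum is Erlang(r, mu)."""
    return [sum(rng.expovariate(mu) for _ in range(r)) for _ in range(n)]

# r = 5 rules at rate mu = 250 per rule: mean service time r/mu = 0.02 time units
samples = service_times(mu=250.0, r=5, n=100_000)
print(fmean(samples))  # close to 0.02
```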

The following key formulas may be derived on the basis of previous work; for clarity, they are written for a single service window, with the extension to c concurrent windows proceeding analogously.

The distribution density function of the service time, which is Erlang with r stages each served at rate μ, may be denoted by

f(t) = μ(μt)^(r−1) e^(−μt) / (r−1)!,  t ≥ 0.

Further, a_k denotes the probability that k packets arrive at the system during the service time of a packet:

a_k = ∫_0^∞ ((λt)^k / k!) e^(−λt) f(t) dt,  k = 0, 1, 2, ….

Hence, the following formula can be obtained:

a_k = C(k+r−1, k) (λ/(λ+μ))^k (μ/(λ+μ))^r.

In addition, p_ij represents the state transition probability, that is, the probability that the number of network packets in the system changes from i to j between consecutive departure instants. The transition of states is correlated with the number of packet arrivals within the service time; hence, the relationship between p_ij and a_k may be obtained as follows:

p_0j = a_j and p_ij = a_(j−i+1) for 1 ≤ i ≤ K and i−1 ≤ j ≤ K−1, with the boundary terms p_iK = 1 − Σ_(m=0)^(K−i) a_m (and p_0K = 1 − Σ_(m=0)^(K−1) a_m), and p_ij = 0 otherwise.

In terms of the steady-state probabilities π_j of the system following the instantaneous exit of packets, the following relationship exists amongst the various states:

π_j = π_0 a_j + Σ_(i=1)^(j+1) π_i a_(j−i+1),  0 ≤ j ≤ K−1.

The following may be derived:

π_(j+1) = (π_j − π_0 a_j − Σ_(i=1)^(j) π_i a_(j−i+1)) / a_0,

so that every π_j may be expressed as a multiple of π_0.

According to the regularity (normalization) condition, we can derive

Σ_(j=0)^(K) π_j = 1,

which determines π_0 and hence the complete distribution.

The load offered at the network layer is given by

ρ = λr / (cμ).

The packet loss rate is denoted by

P_loss = π_K.

And the throughput is expressed by

T = λ(1 − P_loss).
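The derivation above can be turned into a small numerical procedure. The sketch below (function name ours) follows the single-window case: it computes a_k, builds the ratios π_j/π_0 from the balance equations, normalizes, and reads off the loss rate π_K and throughput λ(1 − π_K). Note that it uses the departure-instant distribution directly, so in overload the predicted loss can differ somewhat from the time-average loss observed in simulation, and the multi-window case would require the full c-server chain:

```python
from math import comb

def stage_metrics(lam, mu, r, K):
    """Loss rate and throughput of one firewall stage with a single service
    window, Erlang-r service (rate mu per rule), buffer size K, and Poisson
    arrivals at rate lam, via the chain embedded at departure instants."""
    p = lam / (lam + mu)
    # a[k] = P(k arrivals during one Erlang-r service time): negative binomial
    a = [comb(k + r - 1, k) * p**k * (1 - p)**r for k in range(K + 1)]
    x = [1.0]                               # x[j] = pi_j / pi_0
    for j in range(K):                      # balance equations, solved forward
        s = sum(x[i] * a[j - i + 1] for i in range(1, j + 1))
        x.append((x[j] - a[j] - s) / a[0])
    pi0 = 1.0 / sum(x)                      # normalization over states 0..K
    loss = x[K] * pi0                       # blocking probability pi_K
    return loss, lam * (1 - loss)           # packet loss rate and throughput
```

For example, `stage_metrics(80.0, 250.0, 5, 100)` evaluates one overloaded window (offered load ρ = λr/μ = 1.6), where the loss rate is large, while a lightly loaded window yields a loss rate near zero.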

3.2. Phase 1 Analysis

As shown in Figure 1, the Phase 1 hierarchy is consistent with the subhierarchy described above, where the arrival rate of the packets is λ, the buffer size is denoted by K_1, the number of service windows is represented by c_1, the number of rules is r_1, and the service rate of the rules is indicated by μ_1.

By applying the conclusion drawn in the previous section, the probability that k packets arrive at a queue within the service time of a packet may be expressed by

a_k^(1) = C(k+r_1−1, k) (λ/(λ+μ_1))^k (μ_1/(λ+μ_1))^(r_1).

The state transition probability for Phase 1 is given by

p_ij^(1) = a_(j−i+1)^(1), constructed from a_k^(1) exactly as in the submodule analysis above.

The following formula may be obtained:

π_j^(1) = π_0^(1) a_j^(1) + Σ_(i=1)^(j+1) π_i^(1) a_(j−i+1)^(1),  0 ≤ j ≤ K_1−1.

By expressing each π_j^(1) as a ratio to π_0^(1) and applying the normalization condition, we obtain

Σ_(j=0)^(K_1) π_j^(1) = 1.

The Phase 1 packet loss rate is given by

P_1 = π_(K_1)^(1).

The Phase 1 throughput is given by

T_1 = λ(1 − P_1).

3.3. Phase 2 Analysis

The Phase 2 analysis is comparatively complex. It involves the handling operations implemented by DNS, FTP, and HTTP. The Phase 2 throughput, which is also the overall throughput of the system, is the sum of the throughputs of the DNS, FTP, and HTTP parts.

3.3.1. DNS Handling Analysis

As shown in Figure 1, the DNS handling process is based on a single-phase concurrent Erlang model. If applications with similar handling processes exist in the system, the DNS handling process may be used as a template. During the DNS handling operations, the arrival rate of packets is λ_d = p_d T_1, that is, the DNS share of the Phase 1 throughput; the buffer size is denoted by K_d, the number of service windows is c_d, the number of rules is represented by r_d, and the service rate of the rules is indicated by μ_d.

As with the Phase 1 handling process, the probability that k packets arrive at the queue during the service time of a packet may be expressed by

a_k^(d) = C(k+r_d−1, k) (λ_d/(λ_d+μ_d))^k (μ_d/(λ_d+μ_d))^(r_d).

The state transition probability for the DNS handling operations is given by

p_ij^(d) = a_(j−i+1)^(d), constructed from a_k^(d) as in the submodule analysis.

The following formula may be obtained:

π_j^(d) = π_0^(d) a_j^(d) + Σ_(i=1)^(j+1) π_i^(d) a_(j−i+1)^(d),  0 ≤ j ≤ K_d−1.

By expressing each π_j^(d) as a ratio to π_0^(d) and applying the normalization condition Σ_(j=0)^(K_d) π_j^(d) = 1, the steady-state distribution is obtained.

The packet loss rate for the DNS handling operations is given by

P_d = π_(K_d)^(d).

The throughput for the DNS handling operations is expressed by

T_d = λ_d (1 − P_d).

3.3.2. FTP Handling Analysis

As shown in Figure 1, the FTP handling process is based on a two-phase concurrent Erlang model. If applications with similar handling processes exist in the system, the FTP handling process may be used as a template. The FTP handling operations may be divided into two parts. When a packet arrives at the FTP handling process, it will first be processed by the FTP command handling module, followed by the filtering and analysis of the transmitted data.

(1) FTP Command Handling Operations. During the FTP command handling operations, the arrival rate of the packets is λ_f = p_f T_1, that is, the FTP share of the Phase 1 throughput; the buffer size is denoted by K_f, the number of service windows is c_f, the number of rules is represented by r_f, and the service rate of the rules is μ_f.

The probability that k packets arrive at a queue within the service time of a packet may be expressed by

a_k^(f) = C(k+r_f−1, k) (λ_f/(λ_f+μ_f))^k (μ_f/(λ_f+μ_f))^(r_f).

The state transition probability for the FTP command handling operations may be expressed by

p_ij^(f) = a_(j−i+1)^(f), constructed from a_k^(f) as in the submodule analysis.

The following formula may be obtained:

π_j^(f) = π_0^(f) a_j^(f) + Σ_(i=1)^(j+1) π_i^(f) a_(j−i+1)^(f),  0 ≤ j ≤ K_f−1.

By expressing each π_j^(f) as a ratio to π_0^(f) and applying the normalization condition Σ_(j=0)^(K_f) π_j^(f) = 1, the steady-state distribution is obtained.

The packet loss rate for the FTP command handling operations is given by

P_f = π_(K_f)^(f).

The throughput for the FTP command handling operations is denoted by

T_f = λ_f (1 − P_f).

(2) FTP Data Transmission Handling Operations. In terms of the FTP data transmission handling operations, assume that there are n_f concurrent data transmission handling modules, where the i-th module carries the packet flow proportion p_fi. The arrival rate of packets at the i-th module is λ_fi = p_fi T_f, where T_f is the throughput of the FTP command handling operations; the buffer size is K_fi, the number of service windows is expressed by c_fi, the number of rules is r_fi, and the service rate of the rules is represented by μ_fi.

The probability that k packets arrive at a queue within the service time of a packet may be denoted by

a_k^(fi) = C(k+r_fi−1, k) (λ_fi/(λ_fi+μ_fi))^k (μ_fi/(λ_fi+μ_fi))^(r_fi).

The state transition probability for the FTP data transmission handling operations is given by

p_(jl)^(fi) = a_(l−j+1)^(fi), constructed from a_k^(fi) as in the submodule analysis.

The following formula may be obtained:

π_j^(fi) = π_0^(fi) a_j^(fi) + Σ_(m=1)^(j+1) π_m^(fi) a_(j−m+1)^(fi),  0 ≤ j ≤ K_fi−1.

By expressing each π_j^(fi) as a ratio to π_0^(fi) and applying the normalization condition Σ_(j=0)^(K_fi) π_j^(fi) = 1, the steady-state distribution is obtained.

The packet loss rate for the FTP data transmission handling operations is given by

P_fi = π_(K_fi)^(fi).

The throughput for the FTP data transmission handling operations is denoted by

T_fi = λ_fi (1 − P_fi).

The total throughput for the FTP handling operations is expressed by

T_FTP = Σ_(i=1)^(n_f) T_fi.

3.3.3. HTTP Handling Analysis

As shown in Figure 1, the HTTP handling process is based on a three-phase concurrent Erlang model. If applications with similar handling processes exist in the system, the HTTP handling process may be used as a template. The HTTP handling operations may be divided into three parts. When a packet arrives at the HTTP handling process, it is first handled by the HTTP-request line handling module; then, it is processed by the HTTP-header handling module; and finally, it is handled by the HTTP-body handling module and delivered to upper-layer applications.

(1) HTTP-Request Line Handling Operations. In terms of the HTTP-request line handling operations, the arrival rate of packets is λ_l = p_h T_1, that is, the HTTP share of the Phase 1 throughput; the buffer size is K_l, the number of service windows is denoted by c_l, the number of rules is r_l, and the service rate of the rules is expressed by μ_l.

The probability that k packets arrive at a queue within the service time of a packet may be denoted by

a_k^(l) = C(k+r_l−1, k) (λ_l/(λ_l+μ_l))^k (μ_l/(λ_l+μ_l))^(r_l).

The state transition probability for the HTTP-request line handling operations is given by

p_ij^(l) = a_(j−i+1)^(l), constructed from a_k^(l) as in the submodule analysis.

The following formula may be obtained:

π_j^(l) = π_0^(l) a_j^(l) + Σ_(i=1)^(j+1) π_i^(l) a_(j−i+1)^(l),  0 ≤ j ≤ K_l−1.

By expressing each π_j^(l) as a ratio to π_0^(l) and applying the normalization condition Σ_(j=0)^(K_l) π_j^(l) = 1, the steady-state distribution is obtained.

The packet loss rate for the HTTP-request line handling operations is given by

P_l = π_(K_l)^(l).

The throughput for the HTTP-request line handling operations is denoted by

T_l = λ_l (1 − P_l).

(2) HTTP-Header Handling Operations. With respect to the HTTP-header handling operations, the arrival rate of packets equals the throughput of the HTTP-request line handling operations, λ_e = T_l; the buffer size is K_e, the number of service windows is represented by c_e, the number of rules is r_e, and the service rate of the rules is μ_e.

The probability that k packets arrive at a queue within the service time of a packet may be denoted by

a_k^(e) = C(k+r_e−1, k) (λ_e/(λ_e+μ_e))^k (μ_e/(λ_e+μ_e))^(r_e).

The state transition probability for the HTTP-header handling operations may be expressed by

p_ij^(e) = a_(j−i+1)^(e), constructed from a_k^(e) as in the submodule analysis.

The following formula may be obtained:

π_j^(e) = π_0^(e) a_j^(e) + Σ_(i=1)^(j+1) π_i^(e) a_(j−i+1)^(e),  0 ≤ j ≤ K_e−1.

By expressing each π_j^(e) as a ratio to π_0^(e) and applying the normalization condition Σ_(j=0)^(K_e) π_j^(e) = 1, the steady-state distribution is obtained.

The packet loss rate for the HTTP-header handling operations may be expressed by

P_e = π_(K_e)^(e).

The throughput for the HTTP-header handling operations may be expressed by

T_e = λ_e (1 − P_e).

(3) HTTP-Body Handling Operations. In terms of the HTTP-body handling operations, assume that there are n_h concurrent HTTP-body handling modules, where the i-th module carries the packet flow proportion p_hi. The arrival rate of packets at the i-th module is λ_bi = p_hi T_e, where T_e is the throughput of the HTTP-header handling operations; the buffer size is K_bi, the number of service windows is expressed by c_bi, the number of rules is r_bi, and the service rate of the rules is represented by μ_bi.

The probability that k packets arrive at a queue within the service time of a packet may be denoted by

a_k^(bi) = C(k+r_bi−1, k) (λ_bi/(λ_bi+μ_bi))^k (μ_bi/(λ_bi+μ_bi))^(r_bi).

The state transition probability for the HTTP-body handling operations is given by

p_(jl)^(bi) = a_(l−j+1)^(bi), constructed from a_k^(bi) as in the submodule analysis.

The following formula may be obtained:

π_j^(bi) = π_0^(bi) a_j^(bi) + Σ_(m=1)^(j+1) π_m^(bi) a_(j−m+1)^(bi),  0 ≤ j ≤ K_bi−1.

By expressing each π_j^(bi) as a ratio to π_0^(bi) and applying the normalization condition Σ_(j=0)^(K_bi) π_j^(bi) = 1, the steady-state distribution is obtained.

The packet loss rate for the HTTP-body handling operations may be denoted by

P_bi = π_(K_bi)^(bi).

The throughput for the HTTP-body handling operations may be expressed by

T_bi = λ_bi (1 − P_bi).

The total throughput for the HTTP handling operations may be expressed by

T_HTTP = Σ_(i=1)^(n_h) T_bi.

The overall traffic handled by the firewall is the sum of the traffic handled by the DNS, FTP, and HTTP parts, T = T_d + T_FTP + T_HTTP. The overall packet loss rate of the firewall is therefore given by

P = 1 − T/λ = 1 − (T_d + T_FTP + T_HTTP)/λ.
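The way the per-part results compose can be sketched independently of the queueing details. Below, each stage is abstracted as any function mapping an input rate to a (loss, throughput) pair; the helper names and the toy capacity-clipping stage are ours, standing in for the per-stage formulas above:

```python
def clip_stage(cap):
    """Toy stage: passes traffic up to a capacity cap, drops the excess.
    Stands in for a real (loss, throughput) stage formula."""
    def stage(rate):
        thr = min(rate, cap)
        loss = 1.0 - thr / rate if rate > 0 else 0.0
        return loss, thr
    return stage

def chain(rate, stages):
    """Serial stages (e.g., FTP command then data, or HTTP line/header/body):
    each stage's throughput is the next stage's arrival rate."""
    for stage in stages:
        _, rate = stage(rate)
    return rate

def overall(lam, phase1, branches):
    """Phase 1 followed by parallel branches (proportion, [stages...]) for
    DNS, FTP, and HTTP; returns total throughput and overall loss rate."""
    _, t1 = phase1(lam)
    total = sum(chain(p * t1, stages) for p, stages in branches)
    return total, 1.0 - total / lam
```

With λ = 80 and generous stage capacities the overall loss is zero; throttling one branch (say the HTTP chain) lowers the total throughput and raises the overall loss accordingly.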

4. Experiment Evaluation

This section describes a discrete-event simulated experiment conducted with respect to the firewall model described above. The arrival time of packets and processing time of service windows are stochastically generated, where the arrival process is a Poisson process and the service process follows an Erlang distribution.
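A minimal version of such a simulator for a single stage can be written with a heap of service-completion times. The sketch below (function name ours; parameters illustrative) generates Poisson arrivals, Erlang-r services over c concurrent windows, and a buffer of size K, and reports the observed packet loss rate:

```python
import heapq
import random

def simulate_stage(lam, mu, r, c, K, n_packets, seed=1):
    """Discrete-event sketch of one stage: Poisson arrivals at rate lam,
    c concurrent service windows with Erlang-r service (rate mu per rule),
    and a buffer of size K; arrivals to a full buffer are dropped."""
    rng = random.Random(seed)
    erlang = lambda: sum(rng.expovariate(mu) for _ in range(r))
    busy = []        # heap of completion times of occupied windows
    queue = 0        # packets waiting in the buffer
    t, lost = 0.0, 0
    for _ in range(n_packets):
        t += rng.expovariate(lam)          # next arrival instant
        while busy and busy[0] <= t:       # drain completions due before t
            done = heapq.heappop(busy)
            if queue:                      # a buffered packet takes the window
                queue -= 1
                heapq.heappush(busy, done + erlang())
        if len(busy) < c:                  # a window is free: start service
            heapq.heappush(busy, t + erlang())
        elif queue < K:                    # otherwise wait in the buffer
            queue += 1
        else:                              # buffer full: packet is lost
            lost += 1
    return lost / n_packets
```

For an overloaded single window (λ = 80, μ = 250, r = 5, so the window serves about 50 kpps), the observed loss settles near 1 − 50/80; adding a second window makes the stage stable and the loss collapses toward zero, which is the behavior the model should reproduce.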

The total number of resources is set to 12. We enumerate all combinations of allocated resources, compute the throughput and packet loss rate for each combination using both the theoretical formulae and the simulation programs, and calculate the error between the theoretical and simulated values.
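The enumeration itself is a standard stars-and-bars problem: distributing 12 service windows over the nine window parameters with at least one window each gives C(11, 8) = 165 candidate allocations. A sketch (function name ours):

```python
from itertools import combinations

def allocations(total, parts):
    """Yield every split of `total` service windows over `parts` stages with
    at least one window per stage (stars and bars over the cut positions)."""
    for cuts in combinations(range(1, total), parts - 1):
        bounds = (0,) + cuts + (total,)
        yield tuple(bounds[i + 1] - bounds[i] for i in range(parts))

combos = list(allocations(12, 9))
print(len(combos))  # 165 candidate allocations to evaluate
```

Each tuple can then be fed to the theoretical formulae and the simulator, keeping the allocation with the highest throughput and lowest loss.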

The total number of service stations is 12, and the arrival rate of data packets is λ = 80 kpps (kilopackets per second). In Phase 1, the buffer size K_1 = 100, the number of rules r_1 = 5, and the handling rate μ_1 = 250 kpps. In Phase 2, the DNS traffic accounts for 20% (p_d = 0.2); FTP traffic, 20% (p_f = 0.2); and HTTP traffic, 60% (p_h = 0.6). In the DNS handling part, the buffer size K_d = 100, the number of rules r_d = 5, and the handling rate μ_d = 200 kpps. In the FTP handling part, the FTP command handling buffer size K_f = 100, the number of rules r_f = 5, and the handling rate μ_f = 350 kpps. The FTP data transmission handling part is divided into two data transmission processes, whose traffic proportions are 40% and 60%, respectively. For the first data transmission process, the buffer size K_f1 = 80, the number of rules r_f1 = 5, and the handling rate μ_f1 = 150 kpps. For the second, the buffer size K_f2 = 90, the number of rules r_f2 = 5, and the handling rate μ_f2 = 180 kpps. In the HTTP handling part, the buffer size for HTTP-request line handling K_l = 100, the number of rules r_l = 5, and the handling rate μ_l = 500 kpps. The buffer size for HTTP-header handling K_e = 150, the number of rules r_e = 5, and the handling rate μ_e = 200 kpps. The HTTP-body handling operations consist of two HTTP applications, whose traffic proportions are 40% and 60%, respectively; their buffer sizes are K_b1 = 100 and K_b2 = 100, their numbers of rules are r_b1 = 5 and r_b2 = 5, and their handling rates are μ_b1 = 80 kpps and μ_b2 = 100 kpps. The results are presented in Table 1.

As determined from the experimental results, when the service windows are allocated as c_1 = 2, c_d = 1, c_f = 1, c_f1 = 1, c_f2 = 1, c_l = 1, c_e = 2, c_b1 = 1, and c_b2 = 2, the throughput reaches its maximum of 76.69 kpps with the lowest packet loss rate. Further, the error between the theoretical computation results and the simulated experiment results remains within 3.3%, with a mean error of 0.848%.

5. Conclusion

A two-phase multiservice station and multiprotocol firewall model with multiple concurrent applications was proposed in this paper. Based on the different phases, protocols, and applications, the values of performance indicators such as system throughput and packet loss rate were obtained. The combinations of resource allocations were enumerated for a total of 12 resources, and an optimal allocation was identified by comparing the results of the simulated experiments with the theoretical computations. Furthermore, the small error between the simulated values and the theoretical values indicates that this model can accurately represent the handling process of the firewall. Therefore, it may save a considerable amount of development and testing time in mobile network deployments, exhibiting significant potential for practical application. In the future, we plan to continue this research by emphasizing DDoS detection and user-behavior analysis.

Disclosure

The funding agency had no role in the study design, the collection, analysis, or interpretation of data, the writing of the report, or the decision to submit the article for publication.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Key Research and Development Program (2016YFB0801503).