Abstract

In conventional centralized authorization models, the evaluation performance of the policy decision point (PDP) degrades noticeably as the number of rules embodied in a policy grows. To improve the evaluation performance of the PDP, a distributed policy evaluation engine called XDPEE is presented. In this engine, the single PDP of the centralized authorization model is replaced by multiple PDPs. A policy is decomposed into multiple subpolicies, each with fewer rules, by a decomposition method that balances the cost of the subpolicies deployed to each PDP. Policy decomposition is the key problem in improving the evaluation performance of PDPs. A greedy algorithm with $O(n \log n)$ time complexity for policy decomposition is constructed. In experiments, the policies of the LMS, VMS, and ASMS from real applications are decomposed separately into multiple subpolicies by the greedy algorithm. Policy decomposition guarantees that the cost of the subpolicies deployed to each PDP is equal or approximately equal. Experimental results show that (1) the method of policy decomposition improves the evaluation performance of PDPs effectively and that (2) the evaluation time of PDPs decreases as the number of PDPs grows.

1. Introduction

In the service-oriented architecture (SOA) [1–5] environment, access control [6–9] is a significant part of the security requirements [10–15]. Confronted with requests from numerous users, all published resources must be protected by access control through predominant techniques such as identity authentication [16–20] and dynamic authorization [21–25].

In conventional centralized authorization models, an XACML (eXtensible Access Control Markup Language) [26–28] policy evaluation engine contains a single policy decision point (PDP) that is responsible for granting or denying the access requests of users. The PDP needs to load a policy set containing a large number of policies, each consisting of a large number of rules. The evaluation performance of the PDP decreases significantly when the number of rules in a policy grows considerably [29].

Meanwhile, when users try to access resources concurrently, the policy enforcement point (PEP) calls the PDP to retrieve an authorization decision. This decision is made by evaluating the rules of the policy loaded in the PDP, after which the PEP receives the authorization decision (permit/deny). If the number of users issuing requests concurrently is very large, the time spent accessing resources grows markedly, because later users must wait until the authorization operations of earlier users are completed.

The following bottlenecks arise in improving the evaluation performance of the PDP:
(i) the number of rules in a policy and of access requests in concurrent operation is extremely large;
(ii) all access requests have to be sent to one single PDP;
(iii) when evaluating an access request, the PDP must search all the rules in a policy to find which rule is applicable.

Recent research on improving the evaluation performance of the PDP can be categorized into two groups. The first group uses distributed authorization models. According to the attributes contained within the targets of a policy or a rule, Kateb et al. [29] decomposed the global policy into several subpolicies. Alzahrani et al. [30] proposed an XACML distributed authorization model in which the global policy of the centralized authorization model was decomposed into several subpolicies deployed to corresponding PDPs, so that different PDPs can cooperate with each other. Decat et al. [31] proposed decomposing and distributing tenant policies across provider and tenant in order to evaluate as much of each policy as possible near the data it requires while keeping the tenant access control data confidential. Craven et al. [32] described a method for policy refinement in which policy decomposition is the first stage; they achieved decomposition by applying decomposition rules that relate domain elements represented abstractly to the components and implementations of those elements at a more concrete level.

The second group uses new techniques to improve the evaluation performance of the PDP, such as adopting advanced data structures [33] and building indexes or caches based on access records [34, 35]. Liu et al. [33] presented XEngine, in which the rules embodied in a policy are expressed as integers before requests are evaluated; XEngine transforms the hierarchical tree structure of an XACML policy into a flat structure so that request evaluation time can be effectively reduced. Based on users' access records, Marouf et al. [34] proposed adjusting the relative order of policies or rules in order to speed up request evaluation. Wang et al. [35] presented an XACML policy evaluation engine termed MLOBEE (multilevel optimization based evaluation engine), which adopts a multilevel optimization technology: before evaluating requests, MLOBEE simplifies the rules embodied in a policy and reduces the size of policies, and during evaluation it adopts a variety of cache mechanisms, such as evaluation result caches, attribute caches, and policy caches, which decreases the communication cost between the PDP and other modules.

However, the existing studies scarcely consider the internal structure of a policy during decomposition, so the cost of the subpolicies deployed to each PDP cannot be guaranteed to be equal or approximately equal. As a result, there may be an appreciable difference in evaluation time among PDPs: some subpolicies may be relatively large and others relatively small, which limits the improvement of the evaluation performance of PDPs. Therefore, we present a novel distributed policy evaluation engine and propose a decomposition method in which a policy is decomposed into multiple subpolicies, each with fewer rules, so that the cost of the subpolicies deployed to each PDP is equal or approximately equal. Policy decomposition is the key problem in improving the evaluation performance of PDPs.

This paper makes the following contributions:
(i) a novel distributed policy evaluation engine (called XDPEE) is proposed, which is capable of decomposing policies and distributing requests;
(ii) a discrete optimization model of policy decomposition is presented, and its properties are analyzed;
(iii) a greedy algorithm with a favorable time complexity for solving the optimization model is constructed;
(iv) the evaluation performance of PDPs in XDPEE is compared with that of PDPs in the Sun PDP, and the evaluation time of XDPEE with different numbers of PDPs is measured. Experimental results show that the method of policy decomposition improves the evaluation performance of PDPs substantially.

The remainder of this paper is organized as follows. Section 2 describes a novel distributed policy evaluation engine. A discrete optimization model of policy decomposition is shown and its properties are analyzed in Section 3. In Section 4, we construct a greedy algorithm for solving the optimization model. Section 5 shows experimental results of the evaluation performance improvement of PDPs. Finally, Section 6 presents some conclusions and directions for our future work.

2. Distributed Policy Evaluation Engine

Our proposed distributed policy evaluation engine, termed XDPEE, is shown in Figure 1, where a policy decomposition module (PDM) and a request distribution module (RDM) are introduced on the basis of conventional centralized authorization models. Multiple PDPs are instantiated and cooperate with the PDM at run time. The PEP issues access requests to the RDM, which forwards them to the corresponding PDPs according to the information in each request. If a policy decision process fails, the RDM retransmits the request to a backup PDP for authorization in order to ensure the robustness of the authorization system.

Two schemes for policy decomposition are as follows:
(i) every PDP loads the same policy set, and the RDM distributes access requests to an idle PDP for processing;
(ii) according to its internal structure, a policy (handled by a single PDP) is decomposed into multiple subpolicies (handled by multiple PDPs), each of which contains fewer rules than the original policy; these subpolicies are deployed to the PDPs, so that each PDP loads fewer rules.

The second scheme is adopted because it better improves the evaluation performance of PDPs.
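To make the data flow concrete, the following is a minimal sketch of an RDM in Java, assuming subject-based decomposition so that a request is routed to the PDP holding its subject's subpolicy. The type names (AccessRequest, PolicyDecisionPoint, Decision) and the subject-to-PDP map are illustrative assumptions, not XDPEE's actual API.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Minimal types assumed for illustration; not XDPEE's actual API. */
enum Decision { PERMIT, DENY, INDETERMINATE }
record AccessRequest(String subject, String action, String resource) {}
interface PolicyDecisionPoint { Decision evaluate(AccessRequest request); }

/** Sketch of a request distribution module (RDM) with a backup PDP. */
class RequestDistributor {
    private final List<PolicyDecisionPoint> pdps;    // one PDP per subpolicy
    private final PolicyDecisionPoint backup;        // backup PDP for failed decisions
    private final Map<String, Integer> subjectToPdp; // produced by the PDM's decomposition
    private final ExecutorService pool;

    RequestDistributor(List<PolicyDecisionPoint> pdps, PolicyDecisionPoint backup,
                       Map<String, Integer> subjectToPdp, int threads) {
        this.pdps = pdps;
        this.backup = backup;
        this.subjectToPdp = subjectToPdp;
        this.pool = Executors.newFixedThreadPool(threads);
    }

    /** Route a request to the PDP holding the subpolicy for its subject. */
    Future<Decision> submit(AccessRequest request) {
        int index = subjectToPdp.getOrDefault(request.subject(), 0);
        PolicyDecisionPoint primary = pdps.get(index);
        return pool.submit(() -> {
            try {
                return primary.evaluate(request);
            } catch (RuntimeException failure) {
                // If the policy decision process fails, retransmit the
                // request to the backup PDP, as described above.
                return backup.evaluate(request);
            }
        });
    }
}

The fixed thread pool corresponds to the RDM threads whose optimal count is determined experimentally in Section 5.3.1.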

3. Policy Decomposition

In this section, we discuss policy decomposition in detail. First, the attribute-based decomposition criteria are addressed. Second, a discrete optimization model of policy decomposition is presented. Finally, the properties of policy decomposition are analyzed.

3.1. Decomposition Criteria

A policy is configured for specific subjects, actions, and resources, so the number of rules in a policy sharing the same subjects, actions, and resources may be extremely large. Therefore, policy decomposition can be based on combinations of these three attributes. For example, if a policy is decomposed on the basis of the subject attribute in the target element, rules with the same subject attribute are placed in the same subpolicy.

A policy can be decomposed by decomposition criteria based on combinations of three attributes, subject, action, and resource [29], as shown in Table 1. The decomposition criteria do not alter the policy behavior of the centralized architecture. For category 1, the basis of decomposition is one of the three attributes; for category 2, any two of the three attributes; and for category 3, all three attributes. We adopt the first category and decompose a policy according to the subject attribute, because the rules configured for the same user are then placed in the same subpolicy. Therefore, the rules corresponding to a user (subject) can be retrieved efficiently when requests are evaluated.
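As a small illustration of the adopted criterion, the following sketch gathers rules with the same subject attribute into the same subpolicy fragment; the Rule record is a hypothetical stand-in for an XACML rule, not the schema used by XDPEE.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Illustrative sketch of subject-based decomposition: rules with the same
 *  subject attribute are gathered into the same subpolicy fragment. */
public final class SubjectGrouping {

    record Rule(String subject, String action, String resource, String effect) {}

    /** Map each subject to the list of rules forming its subpolicy fragment. */
    static Map<String, List<Rule>> groupBySubject(List<Rule> rules) {
        Map<String, List<Rule>> bySubject = new LinkedHashMap<>();
        for (Rule r : rules) {
            bySubject.computeIfAbsent(r.subject(), s -> new ArrayList<>()).add(r);
        }
        return bySubject;
    }
}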

If a policy is not properly decomposed, the evaluation performance cannot be improved substantially: some subpolicies might contain many more rules than others. Suppose that a policy is decomposed on the basis of subject and that each subject is accessed with the same probability. How to decompose the policy efficiently so that each PDP spends approximately equal time evaluating requests then becomes the key problem.

3.2. Optimization Model

If a policy is decomposed on the basis of subject, both the number of rules and the evaluation cost corresponding to each subject embodied in the policy are determined. Suppose that the rule set corresponding to the $i$th subject attribute embodied in a policy is $R_i = \{r_{i1}, r_{i2}, \ldots, r_{im_i}\}$ and that the cost set corresponding to these rules is $C_i = \{c_{i1}, c_{i2}, \ldots, c_{im_i}\}$; the total cost of the attribute is shown in

$$\mathrm{Cost}_i = \sum_{j=1}^{m_i} c_{ij}. \quad (1)$$

If a policy contains $n$ subjects, the cost set corresponding to these attributes is $\mathrm{Cost} = \{\mathrm{Cost}_1, \mathrm{Cost}_2, \ldots, \mathrm{Cost}_n\}$. If there are $k$ PDPs, then we need a $k$-partition of $\mathrm{Cost}$ and require that the costs of the subsets after decomposition be equal or approximately equal. For example, if 10 subjects are given with cost set, say, $\mathrm{Cost} = \{10, 9, 8, 7, 6, 5, 4, 3, 2, 2\}$ and there are two PDPs, we need a 2-partition of $\mathrm{Cost}$ such that the cost (sum value) of each subset after decomposition is equal or approximately equal. An optimal result of decomposition is $S_1 = \{10, 9, 7, 2\}$ and $S_2 = \{8, 6, 5, 4, 3, 2\}$, each with sum 28.

The core idea of policy decomposition is that a policy is decomposed into multiple subpolicies according to one of the three categories of decomposition criteria discussed above and that these subpolicies are then deployed to separate PDPs. We aim to make the cost of the subpolicies deployed to each PDP equal or approximately equal; thus, the average evaluation time of the PDPs can be effectively shortened.

The problem of policy decomposition can be summarized as the following optimization model. We suppose that a policy is decomposed into a $k$-partition on the basis of subject and that the cost corresponding to the $i$th subject in this policy is $\mathrm{Cost}_i$ ($i = 1, 2, \ldots, n$). The $k$ PDPs can be regarded as $k$ disjoint sets, expressed as $S_1, S_2, \ldots, S_k$. Each $\mathrm{Cost}_i$ needs to be distributed to one of $S_1, S_2, \ldots, S_k$. We let $x_{ij} = 1$ if $\mathrm{Cost}_i$ is distributed to $S_j$; otherwise we let $x_{ij} = 0$. In (2), $V_j$ stands for the sum value of the elements in $S_j$. Formula (3) is the constraint condition for (2) and indicates that each $\mathrm{Cost}_i$ can be distributed to only one of $S_1, S_2, \ldots, S_k$:

$$V_j = \sum_{i=1}^{n} x_{ij}\,\mathrm{Cost}_i, \quad j = 1, 2, \ldots, k, \quad (2)$$

$$\sum_{j=1}^{k} x_{ij} = 1, \quad x_{ij} \in \{0, 1\}, \quad i = 1, 2, \ldots, n. \quad (3)$$

For policy decomposition, it is essential that the maximum of $V_1, V_2, \ldots, V_k$ be as small as possible. Then the cost of the subpolicies deployed to each PDP is equal or approximately equal, so that the purpose of improving the evaluation performance of PDPs can be achieved. The optimal solution of the optimization model must satisfy

$$V^{*} = \min \max_{1 \le j \le k} V_j. \quad (4)$$
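To make the objective in (2)–(4) concrete, the following sketch computes the subset sums $V_j$ for a candidate assignment and returns the value that (4) minimizes. The array-based encoding of the assignment $x_{ij}$ (assignment[i] = j meaning $\mathrm{Cost}_i$ is placed in $S_j$) is an assumption made for illustration.

/**
 * Illustrative check of the objective in (2)-(4): given costs Cost_i and an
 * assignment (assignment[i] = j places Cost_i in subset S_j), compute the
 * subset sums V_j and return the objective max_j V_j.
 */
public final class DecompositionObjective {
    public static long objective(long[] costs, int[] assignment, int k) {
        long[] subsetSums = new long[k]; // V_1 ... V_k
        for (int i = 0; i < costs.length; i++) {
            subsetSums[assignment[i]] += costs[i]; // constraint (3): one subset per cost
        }
        long max = 0;
        for (long v : subsetSums) max = Math.max(max, v);
        return max; // the value that the optimal solution (4) minimizes
    }

    public static void main(String[] args) {
        long[] costs = {10, 9, 8, 7, 6, 5, 4, 3, 2, 2};  // example from the text
        int[] split = {0, 0, 1, 0, 1, 1, 1, 1, 1, 0};    // S_1 = {10,9,7,2}, S_2 = {8,6,5,4,3,2}
        System.out.println(objective(costs, split, 2));  // prints 28
    }
}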

3.3. Properties

The model of policy decomposition is a discrete optimization model, which has the following properties.

Property 1. The partition $\{S_1, S_2, \ldots, S_k\}$ corresponding to the optimal solution $V^{*}$ of policy decomposition is not unique.

Here is an example. Given, say, $\mathrm{Cost} = \{3, 2, 2, 1, 1, 1\}$ for a 2-partition, we can obtain two results of policy decomposition. One is $S_1 = \{3, 2\}$ and $S_2 = \{2, 1, 1, 1\}$, with $V^{*} = 5$, and the other is $S_1 = \{3, 1, 1\}$ and $S_2 = \{2, 2, 1\}$, with the same $V^{*} = 5$.

Property 2. The optimal solution $V^{*}$ of policy decomposition is unique.

Proof. Suppose that the optimal solution of policy decomposition is not unique and that there is another optimal solution $V'$. If $V' > V^{*}$, then $V'$ cannot satisfy formula (4). If $V' < V^{*}$, then $V^{*}$ is not minimal, which contradicts $V^{*}$ being the optimal solution. Hence $V' = V^{*}$.

Property 3. Let $\{S_1, S_2, \ldots, S_k\}$ correspond to the optimal solution $V^{*}$, and let $S_m$ be a subset attaining $V_m = V^{*}$. If any one element $\mathrm{Cost}_i$ is extracted from $S_m$, a new set $S_m'$ is obtained. The sum value of $S_m'$ is $V_m' = V^{*} - \mathrm{Cost}_i$, and then $V_m' \le V_l$ for every $l \ne m$.

Proof. Suppose that there exists a subset $S_l$ ($l \ne m$) with $V_m' > V_l$; then $V_l + \mathrm{Cost}_i < V_m' + \mathrm{Cost}_i = V^{*}$, which indicates that a better solution results if $\mathrm{Cost}_i$ is moved to $S_l$, the subset corresponding to $V_l$. This better solution contradicts the optimal solution $V^{*}$.

4. Greedy Algorithm of Policy Decomposition

Since the model of policy decomposition is a discrete optimization model, there is no specific algorithm that applies to the problem directly. In what follows, we construct a greedy algorithm that solves the problem of policy decomposition effectively.

In practical applications, if a policy is decomposed on the basis of the subject attribute, the following two facts hold:
(i) the differences between the values $\mathrm{Cost}_i$ are not large;
(ii) the number of elements in $\mathrm{Cost}$ is not very large.

In view of these two facts, the main idea of the proposed greedy algorithm is as follows (see Algorithm 1):
(i) the cost set $\mathrm{Cost}$ is sorted in descending order to obtain a new set $\mathrm{Cost}'$;
(ii) if a $k$-partition is needed, the first $k$ elements of $\mathrm{Cost}'$ are distributed, one each, to $k$ empty subsets;
(iii) each remaining element of $\mathrm{Cost}'$ is distributed, in turn, to the subset whose current sum value is the smallest;
(iv) the algorithm ends when the cost set is exhausted.

The complete greedy algorithm of policy decomposition is shown in Algorithm 1, where the input is the cost set $\mathrm{Cost}$ and $k$ (the number of subsets after policy decomposition) and the output is the $k$ subsets. Our requirement is that the maximum sum value of these subsets be as small as possible. In step (1), heap sort is used to sort the elements of the cost set, obtaining a new set $\mathrm{Cost}'$. In steps (2)–(4), the first $k$ elements of $\mathrm{Cost}'$ are first distributed, one each, to the $k$ subsets. In step (5), a min-heap minHeap is built over the $k$ subsets, keyed by their sum values. In steps (6)–(9), the remaining elements of $\mathrm{Cost}'$ are distributed successively, each to the subset whose sum value is currently the smallest; minHeap is readjusted to maintain the min-heap property after each distribution.

Input: Cost, k
Output: k subsets
(1) Cost′ = Heap_Sort(Cost)   // descending order
(2) for each of the first k elements of Cost′ do
(3)   distribute it to one of the k empty subsets
(4) end for
(5) build a min-heap minHeap over the k subsets, keyed by sum value
(6) for each remaining element of Cost′ do
(7)   distribute it to the subset at the top of minHeap
      (the subset whose sum value is currently the smallest)
(8)   adjust minHeap
(9) end for

The time complexity of the algorithm is analyzed as follows:
(i) heap sort takes $O(n \log n)$ time;
(ii) distributing the first $k$ elements of $\mathrm{Cost}'$ to the $k$ subsets takes $O(k)$ time;
(iii) building minHeap with $k$ elements takes $O(k)$ time;
(iv) distributing the remaining $n - k$ elements of $\mathrm{Cost}'$, with minHeap adjusted after each distribution, takes $O((n - k)\log k)$ time.

Based on the analyses above, the total time complexity of the greedy algorithm is $O(n \log n + (n - k)\log k) = O(n \log n)$, since $k \le n$.
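The following is a runnable sketch of Algorithm 1 in Java, with a PriorityQueue serving as minHeap; the interface (costs as a long array, subsets returned as lists of cost values) is an assumption made for illustration.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

/** Runnable sketch of Algorithm 1 (greedy policy decomposition). */
public final class GreedyDecomposition {

    /** A subset S_j together with its running sum value V_j. */
    private static final class Subset {
        final List<Long> elements = new ArrayList<>();
        long sum = 0;
    }

    /** Partition the cost set into k subsets with approximately equal sums. */
    public static List<List<Long>> decompose(long[] cost, int k) {
        // Step (1): sort the cost set; we iterate it from largest to smallest.
        long[] sorted = cost.clone();
        Arrays.sort(sorted);
        // Step (5): a min-heap over the k (initially empty) subsets, keyed by sum.
        PriorityQueue<Subset> minHeap =
                new PriorityQueue<>(Comparator.comparingLong((Subset s) -> s.sum));
        for (int j = 0; j < k; j++) minHeap.add(new Subset());
        // The first k iterations fill the k empty subsets, matching steps (2)-(4);
        // the remaining iterations perform steps (6)-(9).
        for (int i = sorted.length - 1; i >= 0; i--) {
            Subset smallest = minHeap.poll(); // subset with the smallest sum value
            smallest.elements.add(sorted[i]);
            smallest.sum += sorted[i];
            minHeap.add(smallest);            // re-heapify (step (8))
        }
        List<List<Long>> result = new ArrayList<>();
        for (Subset s : minHeap) result.add(s.elements);
        return result;
    }

    public static void main(String[] args) {
        // The 2-partition example from Section 3.2 (illustrative values).
        long[] cost = {10, 9, 8, 7, 6, 5, 4, 3, 2, 2};
        System.out.println(decompose(cost, 2)); // two subsets, each summing to 28
    }
}

On this example the greedy algorithm attains the optimal objective value 28, consistent with the approximation quality reported below.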

An example is presented here. The greedy algorithm is used to decompose a cost set into three subsets, and the procedure of decomposition is shown in Figure 2, panels (a) to (l). In the decomposition result, the three sum values of the subsets are close to one another; the result is not the theoretical optimal solution, but the difference between the obtained maximum sum value and the optimal one is small.

To further observe and compare the results of policy decomposition using the greedy algorithm, we randomly generate 20 cost sets with different numbers of elements, as shown in Table 2.

Without loss of generality, these 20 cost sets are decomposed into 3, 4, and 5 partitions, respectively; the decomposition results are shown in Figure 3, panels (a) to (c), which compare the maximum sum values of the subsets obtained by decomposition with those of the theoretical optimal solutions. As Figure 3 shows, the results of the greedy algorithm closely approximate the theoretical optimal solutions.

Each result produced by the greedy algorithm is unique and nonrandom. The results are not guaranteed to be optimal solutions, but the approximate solutions obtained are extremely close to, and sometimes equal to, the theoretical optimal solutions. This is sufficient for practical applications.

5. Experimental Results

In order to assess the evaluation performance improvement of PDPs in XDPEE, the test policies and the generation of test requests are introduced first. We then conduct the following experiments:
(i) the optimum number of threads is determined;
(ii) the evaluation performance of PDPs in XDPEE is compared with that of PDPs in the Sun PDP (Sun PDP [29] is a widely used policy decision point, adopted as the decision engine in our experiments);
(iii) the evaluation time of XDPEE with different numbers of PDPs is measured.

5.1. Test Policies

In order to simulate practical application scenarios, we select policies from practical systems [36–38]. The three adopted XACML access control policies are as follows:
(i) library management system (LMS): the LMS provides access control policies by which a public library can use web services to manage books;
(ii) virtual meeting system (VMS): the VMS provides access control policies by which web conference services can be managed. The VMS allows users to organize online meetings on a distributed platform. When a user connects to the server, he/she can enter or exit a meeting, make statements, ask questions at the meeting, and so forth. Every meeting has an administrator, whose responsibilities are initializing the meeting information and setting parameters (such as the meeting's title and organization). The administrator can also assign to every meeting a host, who is capable of selecting a user to make a statement;
(iii) auction sale management system (ASMS): the ASMS provides access control policies by which items can be bought or sold online. A seller sets the lowest price and the description of an item when submitting it for auction. A user can participate in the bidding process by bidding on an item, with the restriction that there must be enough money in his/her account before bidding.

The policy of the LMS contains 720 rules, that of the VMS 945 rules, and that of the ASMS 1760 rules.

5.2. Generation of Test Requests

Martin and Xie [39] proposed analyzing policies by change-impact analysis in order to generate access requests automatically, with the aim of improving test coverage. The main idea is that conflicting policies or rules can be found by conflict detection tools, exploiting the fact that different policies, or different rules in the same policy, may produce inconsistent evaluation results for the same request; correlated access requests can then be constructed for testing from the conflicting policies or rules.

Bertolino et al. [40] proposed automatically generating access requests to test the correctness of the PDP as well as of the configured policies. They pointed out that the Context Schema, defined by the XML Schema of XACML, describes all the structures of access requests that can be accepted by the PDP, that is, all valid input requests. Their X-CREATE tool generates possible structures of access requests according to the Context Schema of XACML; a policy analyzer obtains the possible input values of every attribute from a policy, and a policy manager randomly allocates the obtained input values to the structures of the access requests. Another test scheme is Simple Combinatorial, which generates access requests from all possible combinations of the subject, action, and resource attribute values in XACML policies. Their work also analyzes the advantages of these two schemes in debugging.

According to the actual requirements of the performance test, a method combining Change-Impact, Context Schema, and Simple Combinatorial is adopted to simulate practical access requests, in the combinatorial style sketched below.
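As an illustration of the Simple Combinatorial scheme described above, the following sketch enumerates the Cartesian product of attribute values drawn from a policy. The attribute value lists in main are hypothetical placeholders, not values taken from the LMS, VMS, or ASMS policies.

import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of the Simple Combinatorial scheme: one access
 *  request per combination of subject, action, and resource values. */
public final class SimpleCombinatorialGenerator {

    record AccessRequest(String subject, String action, String resource) {}

    static List<AccessRequest> generate(List<String> subjects,
                                        List<String> actions,
                                        List<String> resources) {
        List<AccessRequest> requests = new ArrayList<>();
        for (String s : subjects)            // Cartesian product of the
            for (String a : actions)         // three attribute domains
                for (String r : resources)
                    requests.add(new AccessRequest(s, a, r));
        return requests;
    }

    public static void main(String[] args) {
        List<AccessRequest> requests = generate(
                List.of("borrower", "librarian"),  // subjects (hypothetical)
                List.of("borrow", "return"),       // actions (hypothetical)
                List.of("book", "journal"));       // resources (hypothetical)
        System.out.println(requests.size() + " requests"); // 2 * 2 * 2 = 8
    }
}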

5.3. Performance Tests and Comparisons

In light of actual requirements, the policies of the LMS, VMS, and ASMS need to be expanded separately into three larger policies containing more rules. New rules are constructed from the Cartesian product of the different subjects, actions, resources, and conditions appearing in the rules of a policy and are added to the original policy; a sketch of this expansion appears below. The numbers of rules in the policies of the LMS, VMS, and ASMS are expanded to 3000, 6000, and 9000, respectively. Finally, each policy is decomposed into multiple subpolicies, each with fewer rules. Policy decomposition guarantees that the cost of the subpolicies deployed to each PDP is equal or approximately equal.
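The rule expansion can be sketched in the same combinatorial style; the Rule record and the cap on the rule count are assumptions made for illustration, not the exact expansion procedure used in the experiments.

import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of policy expansion by Cartesian product. */
public final class PolicyExpander {

    record Rule(String subject, String action, String resource, String condition) {}

    /** Build new rules from all combinations, stopping at targetCount
     *  (e.g., 3000, 6000, or 9000 rules as in the expanded test policies). */
    static List<Rule> expand(List<String> subjects, List<String> actions,
                             List<String> resources, List<String> conditions,
                             int targetCount) {
        List<Rule> rules = new ArrayList<>();
        outer:
        for (String s : subjects)
            for (String a : actions)
                for (String r : resources)
                    for (String c : conditions) {
                        rules.add(new Rule(s, a, r, c));
                        if (rules.size() == targetCount) break outer;
                    }
        return rules;
    }
}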

5.3.1. Determination of Optimum Number of Threads

The evaluation time of PDPs is related to the number of threads in the RDM. Accordingly, it is essential to determine the optimal number of threads first. With the number of access requests fixed (1000 access requests are generated randomly here), the variation of the evaluation time of PDPs with the number of threads is shown in Figures 4, 5, and 6, which describe tests of XDPEE and the Sun PDP, each with 4, 5, and 6 PDPs, respectively.

In Figures 4, 5, and 6, we observe that
(i) the evaluation time of PDPs decreases as the number of threads grows;
(ii) when the number of threads exceeds 10, the evaluation time of PDPs tends toward a constant value.

Therefore, the number of threads in the RDM is set to 10 in the following experiments.

5.3.2. Performance Comparisons of XDPEE with Sun PDP

In order to assess the evaluation performance improvement of PDPs achieved by policy decomposition, performance comparisons of XDPEE with the Sun PDP, each with 4, 5, and 6 PDPs, are made. We randomly generate 500, 1000, …, 6000 access requests to measure the evaluation time of PDPs. For the policies of the LMS, VMS, and ASMS, the variation of the evaluation time of PDPs with the number of access requests is shown in Figures 7, 8, and 9.

In Figures 7, 8, and 9, we observe that
(i) the evaluation time of PDPs increases as the number of access requests grows;
(ii) the evaluation time of PDPs in XDPEE grows more slowly than that of PDPs in the Sun PDP;
(iii) when the number of access requests reaches 6000, the improvement (in percent) of the evaluation performance of PDPs in XDPEE over that of PDPs in the Sun PDP is as shown in Table 3.

5.3.3. Measurements of Evaluation Time of XDPEE with Different Numbers of PDPs

In order to further assess the evaluation performance improvement from multiple PDPs, the evaluation time of XDPEE with 1, 2, 4, and 6 PDPs is measured. We randomly generate 500, 1000, …, 6000 access requests to measure the evaluation time of PDPs. For the policies of the LMS, VMS, and ASMS, the evaluation time of XDPEE with different numbers of PDPs is shown in Figure 10.

In Figure 10, we observe that
(i) the evaluation time of PDPs in XDPEE increases as the number of access requests grows;
(ii) the evaluation time of PDPs in XDPEE decreases as the number of PDPs grows;
(iii) when the number of access requests reaches 6000, the improvement (in percent) of the evaluation time with 6, 4, and 2 PDPs relative to that with a single PDP is as shown in Table 4.

6. Conclusions

When requests are evaluated, the evaluation performance of the PDP is affected by several factors, such as the number of rules embodied in a policy, the relative order of the rules, and the number of access requests. Based on these influencing factors and the available studies, this paper proposes a distributed policy evaluation engine with several PDPs, in which a policy decomposition module (PDM) and a request distribution module (RDM) are introduced. The single-PDP pattern of the centralized authorization model is thereby changed. A policy is decomposed into multiple subpolicies, each with fewer rules, so that the cost of the subpolicies deployed to each PDP is equal or approximately equal. As a result, the average evaluation time of the PDPs is effectively shortened, and their evaluation performance is improved substantially.

This paper also presents a discrete optimization model of policy decomposition. According to the properties of the optimization model, a greedy algorithm with a favorable time complexity is constructed for solving the model.

In the experiments, the optimal number of threads in the RDM is determined, the evaluation performance of PDPs in XDPEE is compared with that of PDPs in the Sun PDP, and the evaluation time of XDPEE with different numbers of PDPs is measured. For the policies of the LMS, VMS, and ASMS, the experimental results show that
(i) the evaluation time of PDPs in XDPEE is less than that of PDPs in the Sun PDP, regardless of the number of access requests;
(ii) the greater the number of PDPs, the shorter the evaluation time of PDPs in XDPEE.

In our simulation with a considerable number of access requests, the evaluation performance of PDPs in XDPEE is improved effectively, which meets the needs of authorization services in SOA environments. Meanwhile, attention must be paid to ensuring the correctness of authorization services. At present, authorization is carried out on the basis of policies configured by administrators. Future research will concentrate on how to adopt security and risk assessment mechanisms for authorization to improve its correctness.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work is supported by the State-funded project of the Higher Education and Fundamental Scientific Research in China.