Special Issue: Challenges and Opportunities of Network Virtualization over Wireless Mobile Networks
Research Article | Open Access
Distributed Secure Service Composition with Declassification in Mobile Clouds
The regional and dynamic characteristics of mobile clouds pose a great challenge to information flow security during service composition. Although secure verification approaches based on standard noninterference provide a solid assurance of information flow security for composite services, overly strict constraints on service components may cause the composition procedure to fail. In order to ensure the availability of composite services, we specify declassification policies based on cryptographic operations to allow data to be legally declassified. We also propose an improved distributed secure service composition framework and approach, in which cloud platforms in multiple domains cooperate with each other to complete the declassification and the secure composition procedure. The experiments and evaluation indicate that our approach provides a more reliable and efficient way to compose services securely in mobile clouds.
Mobile devices (e.g., smartphones and tablet PCs) are becoming increasingly popular in daily life thanks to their portability, pervasive connectivity, and various applications (e.g., iPhone and Android apps). In recent years in particular, more kinds of basic functions (e.g., computation, storage, and networking) have been offered by cloud computing as software services for elastic management and rapid service delivery at low cost, such as SDS (Software Defined Storage) , SDN (Software Defined Network) , and cloud-based mobile apps. With the explosion of mobile applications and the support of cloud computing, mobile computing based on clouds provides a new and promising paradigm for delivering IT services more effectively and conveniently . Moreover, services provided by different clouds and mobile terminals can be composed together to form more powerful applications [4, 5], for example, a trip mode selection application composed of a Positioning service, a Walking Speed service, a Bus Tracking service, and an Arrival Estimate service .
However, because of the regional and heterogeneous characteristics of mobile networks, multiple clouds are deployed in different network domains. Due to this multidomain feature of mobile clouds, data located in different mobile terminals and domains may have different security levels, which poses a great challenge to the security of service composition across multiple mobile clouds. For instance, the personal medical records in an e-health data center have a high security level, while the position of an ambulance has a lower one. When these services are composed together for a patient's emergency, data with different security levels are transmitted among them. If the services are composed in an insecure way, an operation in one service may transmit confidential data to a public object and cause information leakage. Access control has been widely used to protect the sensitive information of an individual service from being released to unauthorized attackers . However, for a composite service in mobile networks, data may be processed dynamically by several services from multiple clouds. Access control cannot detect the information leakage caused by subsequent operations in other services. Therefore, information flow security is one of the major concerns about service composition in mobile clouds.
In order to enforce data security during service composition, various security mechanisms have been proposed to validate the information flow in composite services based on type systems, Petri nets, model checking, static program analysis, and real-time monitoring. Using a type system , Hutter and Volkamer  define a set of information flow security rules that check that the service composition is secure during the compilation of the workflow code. Petri nets provide a formal way to model composite services, and Accorsi and Wonnemann  identify leaks by analyzing such models. Model checking is an automatic verification technique that can be used to detect information leaks . Nakajima  embedded the lattice model into the Business Process Execution Language (BPEL) and verified the absence of invalid information flows with model checking. Program analysis is used to construct the dependences among different inputs and outputs; information flow control (IFC) policies can then be designed according to the security requirements. There are two ways to analyze software according to the object analyzed, that is, static analysis for source code and dynamic analysis for executable programs. For static analysis, She et al. [13, 14] define a transformation factor to measure how likely an output is to depend on the input data in different candidate services. In order to improve the accuracy of static analysis, the PDG (Program Dependence Graph) is used to specify the dependences between objects in a composite service [15, 16]. Compared with static analysis, dynamic analysis is built on real-time monitoring of the executing program, which provides a more accurate way to detect illegal information flows at run time . But real-time monitoring increases the cost of service execution, which may decrease QoS and interfere with the user experience, especially when dozens of services are composed together.
Based on the above approaches, many schemes for secure service composition among clouds have been proposed to address information leakage in cloud services. Bacon et al.  review a range of IFC models and implementations to identify opportunities for using IFC within a cloud computing context, including type systems, static analysis, and runtime dynamic analysis. Chou  presents the CloudIFC (Cloud Information Flow Control) model to strictly control output information flows in cloud services. Based on specific information flow control rules and the variable dependencies obtained by static analysis, they propose a novel checking method using MapReduce to decrease the verification cost. Solanki et al.  develop a new access and information flow control paradigm for service-based systems, namely, WS-AIFC, to secure the information flow among services. Based on a dependence list for each data object, WS-AIFC supports flexible cross-domain access and information flow validation. Considering the multidomain nature of clouds, we  propose a distributed information flow security verification framework and approach that provide better load balance and effectively reduce the verification cost across multiple clouds.
Although the above approaches provide a solid assurance of information flow security for composite services, implementing these IFC policies in real applications is still a challenge. These policies aim at standard noninterference, which characterizes the complete absence of any information flow or causal flow from high-level entities to low-level ones. However, this requirement is so strict that few services can satisfy it in real applications. If all the candidate services fail the verification, there is no available execution path, which causes the failure of the whole composite service. Meanwhile, in mobile clouds, services are bound together dynamically during service composition, which means the security sensitivities of the input and output data may change when a mobile terminal moves into a new domain. Considering dozens of candidate services with similar functions, selecting appropriate components to compose the user's required application by a type system, global model checking, or centralized static analysis is a complex task. For a type system, when the user's initial inputs change, the service code needs to be rebuilt, which brings extra cost to secure service composition. For global model checking and centralized static analysis, it is impractical to employ a centralized entity across multiple clouds to verify information flow security. Moreover, the cost of verification can increase rapidly when the application involves more components and the number of candidate services grows. First, the same service component has to be reverified in different composite services. Second, the state explosion problem arises if each service component is complicated.
Therefore, a distributed and efficient information flow control mechanism that supports declassifying or downgrading information is needed for secure and reliable service composition in mobile clouds. Compared to the paper , we provide the following new extensions. Firstly, the mobile cloud is a more complex scenario, which involves the cooperation of different cloud platforms in multiple domains during the composition, and we add more related works for a clearer description. Secondly, we give more specific definitions of declassification operations and design an improved formal information flow security model supporting declassification. Thirdly, considering the limited energy and computing resources of mobile terminals, we improve the distributed secure service composition framework and algorithms to involve cloud platforms, which can take over some of the service verification load. Besides, more experiments and evaluations are performed for a deeper analysis of our approach.
The rest of the paper is structured as follows. Section 2 gives a formal definition of the service chain model in mobile clouds. Section 3 presents the improved computation rules with declassifying information flow in service chain. In Section 4, we propose the secure service composition with declassification mechanism for service chain in mobile clouds. Section 5 evaluates the proposed approach. Section 6 concludes the paper.
As shown in Figure 1, a mobile cloud MC is a large-scale distributed environment that consists of multiple heterogeneous domains; that is, MC = {d_1, d_2, ..., d_n}. A domain d has various types of data resources R. Services provided by mobile terminals MT or cloud platforms CP can be composed into a more powerful application according to different customers' requirements. For a clear description, each service provided by either terminals or clouds can be uniformly regarded as a service node SN in the domain. There is also a security authority DSA in each domain for the management of security policies expressed by the domain certificate DCe. Due to the limited energy and computation resources of mobile terminals, there is a cloud platform CP for processing more complex tasks. So domain d can be represented as d = {SN, R, DSA, CP}.
Referring to the definition in , each service provided by a service node SN can be represented as a tuple consisting of the domain the service belongs to, the input set of the service, the output set of the service, the service function, and the service certificate, which specifies the security properties. For each service, its input set is the union of three parts: the inputs it receives from its predecessor, the inputs from the domain resources R, and the inputs from the service node itself. In the same way, its output set is the union of the outputs it sends to its successor, the outputs updated to the domain resources R, and the outputs written to its local storage.
A service chain SC is a simplified composite service with a sequence structure. CH is the execution chain of the services s_1, s_2, ..., s_n. In CH, each service has only one predecessor and one successor. For a clear description, two virtual services are used to denote the initial user at the head and tail of the chain. The inputs and outputs of SC comprise those of all the service components.
Due to the complex operations in service chain and dynamic network environment, the inner-service dependency and interservice dependency are defined to represent the flows between different inputs and outputs based on Program Dependence Graph (PDG) .
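The two kinds of dependence above can be pictured as a small directed graph over input and output objects. The following Python sketch is purely illustrative; the class and method names are assumptions and do not come from the paper's formal PDG construction.

```python
from collections import defaultdict

class DependenceGraph:
    """Hypothetical sketch of inner- and interservice dependences."""

    def __init__(self):
        # object name -> set of object names it directly depends on
        self.deps = defaultdict(set)

    def add_inner(self, output, inputs):
        # inner-service dependence: an output depends on inputs of the same service
        self.deps[output].update(inputs)

    def add_inter(self, successor_input, predecessor_output):
        # interservice dependence: a successor's input receives a predecessor's output
        self.deps[successor_input].add(predecessor_output)

    def closure(self, obj):
        # all objects that obj transitively depends on
        seen, stack = set(), [obj]
        while stack:
            for dep in self.deps[stack.pop()]:
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen
```

The transitive closure is what the later computation rules operate on: the required security level of an object is determined by everything in its closure.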
3. Secure Information Flow Model with Declassification in Service Chain
3.1. Multilevel Security Model
In order to represent the different sensitivities of data resources in mobile clouds, a multilevel security model is defined as a lattice (L, ≤), where L is a finite set of security levels that is totally ordered by ≤.
For each input or output object, we define two mappings: one maps the object to the required security level of the data stored in it, while the other maps it to its clearance level, which represents the highest level of data it can access. The required security levels are computed according to the dependences of the input and output data, as described by the computation rules in the following sections. The clearance levels are provided by the objects that want to access the data and can be specified in the service certificates.
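As a hedged illustration, a totally ordered level set and the basic dominance check can be sketched as follows; the level names and function names are assumptions for illustration, not the paper's notation.

```python
# Illustrative totally ordered set of security levels, lowest to highest.
LEVELS = ["public", "internal", "confidential", "secret"]

def leq(a, b):
    """a <= b in the total order of security levels."""
    return LEVELS.index(a) <= LEVELS.index(b)

def flow_allowed(required, clearance):
    # An object may hold data only if its clearance level dominates
    # the required security level of that data.
    return leq(required, clearance)
```

In this sketch, `flow_allowed` is exactly the per-object condition that the noninterference-based verification checks for every input and output.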
3.2. Secure Information Flow with Standard Noninterference
For data with different security requirements, the computation rules (CRs) on the required security level are defined in  as follows:
CR3. , , , and .
Based on standard noninterference, we proposed a strong security definition of information flow for composite services in .
Definition 1. The information flow in service chain is considered secure if , satisfies , where and
By this definition, the chain is considered secure when there is no flow from a high-level object to a lower-level one across all service components. However, the strong security constraints enforce that the flow of information must comply with the security level ordering and tolerate no exceptions. In real applications, as the composite service executes, the required security levels of inputs and outputs become higher and higher according to the above CRs, a condition so strict that few candidate service components can satisfy it. In this case, service composition would have a high failure rate. Therefore, a more general flow policy that allows data declassification needs to be proposed to improve the availability of composite services.
3.3. Secure Service Composition Model with Cryptographic Operations
Due to the strong security condition, declassification operations are needed for secure service composition. Cryptographic operations are promising ways of maintaining data confidentiality and integrity, for example, encryption and digital signatures. Through cryptographic operations, processed secret data can be transmitted to a public object, which realizes the declassification of the data. Therefore, extra encryption and decryption operations can be added to the service function of each service.
For each data object, its plaintext and ciphertext forms are distinguished, and encryption and decryption operations are defined on it. Because of the low efficiency of homomorphic encryption , only traditional cryptographic operations are considered in this paper. Under this definition, classified data cannot be directly processed by regular operations, since otherwise the plaintext might not be recoverable. But the basic input, output, encryption, and decryption operations are still supported for classified data.
When the data in an object is encrypted, it is transmitted in a more secure way, and an attacker must work harder to crack the ciphertext, which depends on the security of the encryption algorithm and the key. Thus we use the security level of the encryption pair to represent the security level of the classified data. Encryption with a more complex algorithm and key means this level is lower. The security level of reencrypted data depends on the strongest algorithm and key applied. When the data is decrypted, it is no longer protected by encryption, and its security level returns to its original value. According to the analysis above, we can extend the basic computation rules as follows:
CR4. For any object, if it is encrypted by an encryption algorithm and key, its required security level becomes the security level of that encryption pair.
CR5. For a ciphertext object, if it is decrypted, its required security level returns to its original value.
In the traditional definition of standard noninterference, high-security-level data are not allowed to flow to an object with a lower level. The encryption operation may violate this requirement. But an attacker still cannot obtain the sensitive data if he cannot crack the ciphertext, so the flow is still considered secure even though the sensitive data is transferred to an object with lower clearance. In order to specify this special downgrading flow in composite services, an extended definition of inner dependence is proposed as follows.
Definition 2. For represents the set of inputs that depends on, where is the pair of encryption algorithm and key that adopts; is the pair of decryption algorithm and key that dependent inputs adopt. Then and , there are four cases to consider:
(1) is plaintext and outputs as the plaintext; there is .
(2) is plaintext but outputs as the ciphertext encrypted by ; there is .
(3) is ciphertext but outputs as the plaintext; it means is decrypted with during the execution of service. Then there is .
(4) is ciphertext and also outputs as the ciphertext; there are three different cases:
(a) If is decrypted with during the execution of service, it means is operated as plaintext and is encrypted by another encryption algorithm and key. Then there is .
(b) If is not decrypted but is reencrypted by , we can obtain .
(c) If is not decrypted and is not reencrypted, there is .
Based on the extended inner dependence, interdependence can be defined recursively as follows.
Definition 3. represents the set of inputs or outputs in different services that depends on. is the set of pairs of the encryption algorithm and key that is used along the execution path, while represents the set of all decryption operations. For each , there are five cases to consider:
(1) : and , if , there is .
(2) : and , if , , and , there is , where and De.
(3) : , if and , there is , where and .
(4) : and , if , , , and , there is , where and .
(5) : and , if , , and , there is , where and .
Based on the extended inner and interdependence, the improved security definition of information flow for composite services can be presented as follows.
Definition 4. The information flow in a service chain is considered secure if , satisfies the following conditions:
(1) , and ,
(i) if , there is ;
(ii) if , there is , where .
(2) , , , and ,
(i) if , there is ;
(ii) if , there is , where .
According to Definition 4, two different types of flow are considered separately, that is, unclassified and classified flows. An unclassified flow must satisfy the traditional noninterference constraints; that is, the clearance of each input or output must be no less than the required security level, which depends on all related inputs and outputs in the service and its predecessor. For a classified flow, data security depends on the encryption operation, so the flow can be considered secure when the clearance of the input or output is equal to or greater than the required security level of the strongest encryption operation.
Based on the improved information flow security definition, we can deduce the security constraints on each service as the following theorem.
Theorem 1. The information flow in a service chain with n steps is considered secure if each service in the chain satisfies the following conditions:
(1) , and ,
(a) if is not encrypted, there is ;
(b) if is encrypted by , there is .
(2) , and ,
(a) if is not encrypted, there is ;
(b) if is encrypted by , there is .
Proof. First, let n = 2; then there are two service components involved in the service chain.
Case 1. The inner information flow in each service component is considered first; that is, , , and .
(1) Condition (1)(a) provides that for each where , there is .
(2) Condition (1)(b) provides that for each where , there is .
In the same way, we can conclude that the information flow is also secure in the other component.
Case 2. The information flow between the two service components is considered; that is, , , and .
(1) , , and , according to Definition 3(1), there is where and , and condition (2) provides .
(2) , , and , according to Definition 3(2), there is , , and .
(i) If satisfies , is not encrypted. Condition (1)(a) provides , and condition (2)(a) provides . Therefore, .
(ii) If satisfies , is encrypted by . There is . Condition (1)(b) provides . Condition (2)(a) and CR 4 provide .
(3) and , according to Definition 3(3), there is and .
(i) If satisfies , is not encrypted. Condition (1)(a) provides , and condition (2)(a) provides . Therefore, .
(ii) If satisfies , is encrypted by . There is . Condition (1)(b) provides .
(4) , and , according to Definition 3(4), there is , , , , and , and there is where and .
(i) If satisfies , there are two different cases:
(a) For and , CR 3 provides . Condition (1)(a) provides and condition (2)(a) provides . Therefore, .
(b) For , , , and , CR 5 provides and condition (1)(a) provides , so .
(ii) If satisfies , there are four different cases:
(a) For , , , and where , condition (2)(b) provides .
(b) For , , , and where , condition (2)(b) provides .
(c) For , , , and where , condition (2)(b) provides .
(d) For , , , and where , condition (2)(b) provides .
Based on the above analysis and Definition 4, the information flow between the two service components is secure.
Therefore, Theorem 1 is true when n = 2.
Then we assume that Theorem 1 is true when n = k, and the proof for n = k + 1 is presented as follows.
Case 1. The inner information flow in the (k + 1)th service component is considered; that is, , and .
(1) Condition (1)(a) provides that for each where , there is .
(2) Condition (1)(b) provides that for each where , there is .
And the above assumption provides that the information flow in the first k components is secure.
Case 2. The assumption provides that the information flow among the first k service steps is secure. Then the interservice information flows between the (k + 1)th service and the former services are considered; that is, , , and , .
According to Definition 3(5) and Lemma 1 in , there is , , , , and .
(1) For there is .
(i) If satisfies , there is for , and where . Condition (2)(a) provides and the assumption provides . So there is .
(ii) If satisfies , there is for and where . Condition (2)(b) provides . So there is .
(2) For , the following cases are considered:
(i) If satisfies , there are two cases:
(a) For , and where and , condition (1)(a) provides . CR 2 provides . The assumption provides . So there is .
(b) For , , and where and , CR 5 provides and condition (1)(a) provides , so there is .
(ii) If satisfies , there are five cases:
(a) For , , and where and , condition (1)(b) provides . So there is .
(b) For , , and where and , there is . Condition (1)(b) provides . So there is .
(c) For , , and where and but , there is . Condition (1)(b) provides . So there is .
(d) For , , and where and but , there is . Condition (1)(b) provides . So there is .
(e) For , , and where and , there is . Condition (1)(b) provides .
Based on the above analysis and Definition 4, the information flows between the (k + 1)th service and the former services are secure.
Therefore, Theorem 1 is also true when n = k + 1.
In conclusion, Theorem 1 is true.
Based on Theorem 1 above, we can propose an improved service composition mechanism supporting declassification operations. The specific declassification policies (DPs) are presented as follows.
DP 1. For , , , if , then needs to be encrypted by which satisfies .
DP 2. For , , , if , then needs to be encrypted by which satisfies .
According to the declassification policies, when the provided security level of an object cannot satisfy the strict conditions, cryptographic operations are adopted to declassify the required security level while still preserving information flow security.
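As a hedged illustration of how CR 4, CR 5, and the declassification policies interact, the following sketch lowers an object's required level by encryption when its clearance is insufficient. The level names, function names, and the numeric ordering are assumptions for illustration, not the paper's notation.

```python
# Illustrative totally ordered security levels, lowest to highest.
LEVELS = ["public", "internal", "confidential", "secret"]

def idx(level):
    return LEVELS.index(level)

def encrypt_level(original, cipher_level):
    # CR 4 (sketch): the required level of the ciphertext is the level of the
    # strongest encryption pair protecting it; a stronger pair means a lower level.
    return min(original, cipher_level, key=idx)

def decrypt_level(original):
    # CR 5 (sketch): after decryption the required level returns to its original value.
    return original

def declassify_if_needed(required, clearance, cipher_level):
    # DP 1 / DP 2 (sketch): if the clearance cannot dominate the required level,
    # encrypt with a pair whose level is dominated by the clearance.
    # Returns (new required level, whether encryption was applied).
    if idx(required) <= idx(clearance):
        return required, False            # already secure, no encryption needed
    if idx(cipher_level) <= idx(clearance):
        return encrypt_level(required, cipher_level), True
    raise ValueError("no sufficiently strong cipher pair available")
```

The key point the sketch shows is that declassification never changes the data's sensitivity, only the required level of the protected form that flows onward.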
4. Secure Service Composition with Declassification in Mobile Network
4.1. Secure Service Composition with Declassification Framework in Mobile Network
In a mobile cloud system with multiple domains, there are several candidate services with the same function but different providers. Traditional secure service composition approaches are based on standard information flow verification techniques in which insecure candidate services are filtered out. However, this may be so strict that few candidate services can pass in real applications, which leads to the failure of service composition. Based on the declassification policies, we propose an improved secure service composition framework supporting declassification operations, which is shown in Figure 2.
This framework is constructed as a distributed secure service composition framework involving three main kinds of entities, that is, Cloud Platforms (CP), Candidate Services (CS), and Domain Security Authorities (DSA). Considering the limited energy and computation resources of mobile terminals, the verification procedure is executed by the CPs. The DSAs are responsible for the management of the security certificate SCe for each service node. SCe includes the provided security levels of the inputs and outputs, the dependences between the inputs and outputs, and the node's public key. If the service node is a fixed one, that is, its services are provided by a cloud platform, the certificate is generated when the service is first deployed on the cloud platform. If the service node is a mobile one, that is, its services are provided by a mobile terminal, the certificate is generated when the terminal first moves into the domain.
During the verification, all candidate services send their dynamic input data and certificates to the cloud platform to finish the verification procedure. There are two different scenarios, that is, inner-domain and interdomain verification. For inner-domain verification, the candidate services, CP, and DSA in the same domain are involved. For interdomain verification, the participating entities include not only the candidate services but also the CPs and DSAs of both corresponding domains.
Compared to the traditional verification procedure in , declassification based on cryptographic operations is executed automatically to repair insecure information flows according to the declassification policies. If the information flow security verification returns failure, each insecure component needs to negotiate a session key with its adjacent nodes for encryption and decryption during service execution. For a clear description, we mainly focus on the declassification procedure in this paper.
4.2. Cryptographic Operations for Declassification in Service Composition
4.2.1. Cryptographic Operation Agent
Based on Theorem 1, basic cryptographic operations must be supported by each service node to realize the declassification of information flows during service composition. Many relevant security specifications have been proposed to protect data confidentiality and integrity during service execution, such as XML Encryption and Signature, WS-Security, SAML (Security Assertion Markup Language), XACML (XML Access Control Markup Language), and XKMS (XML Key Management Specification) . By building on the basic security functions supported by these specifications, a cryptographic operation agent (COA) can be designed and deployed in each service node, mobile terminal, or cloud platform to execute the declassification operations, as shown in Figure 3.
The cryptographic operation agent is composed of three function modules, that is, the key negotiator, the encryptor, and the decryptor. The key negotiator is responsible for key management, including key generation, negotiation with other services, key storage, and key update. The encryptor and decryptor are responsible for data encryption and decryption during service execution. The agent completes the declassification procedure in two phases, that is, key negotiation and data encryption and decryption.
4.2.2. Key Negotiation Phase
The key negotiation phase is the preparation phase for data declassification, and it is also the most critical step. In this phase, for each insecure input or output, the two related services negotiate to generate an appropriate encryption algorithm and key that ensure information flow security according to the DPs. Because of the multiple domains, there are two kinds of negotiation process, that is, inner-domain negotiation and interdomain negotiation, which are shown in Figures 4 and 5. The key negotiation procedure follows the XKMS (XML Key Management Specification).
When key negotiation begins between two adjacent service nodes, the certificates containing their public keys are delivered to each other. Then random numbers protected by the public keys are exchanged at the fourth and seventh steps. Finally, the session key is computed from these random numbers with a standard key generation algorithm. Meanwhile, the encryption algorithm can also be negotiated during this procedure. In order to ensure information flow security in the subsequent composition, the length of the key, the complexity of the random numbers, the key generation algorithm, and the encryption algorithm must satisfy the requirements of the security level. The pseudocode of the key negotiation is presented as Algorithm 1.
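The session-key derivation described above can be sketched in a few lines; this is an illustrative assumption, not Algorithm 1 itself. The public-key transport of the random numbers is abstracted away, and the algorithm identifier and KDF choice are hypothetical.

```python
import hashlib
import hmac
import secrets

def fresh_nonce():
    """A random number contributed by one negotiating node (assumed to be
    exchanged under the peer's public key, which is not modeled here)."""
    return secrets.token_bytes(32)

def derive_session_key(nonce_a, nonce_b, algorithm=b"AES-256"):
    # Both sides compute the same key from the two exchanged random numbers.
    # The negotiated algorithm identifier is mixed in so both sides agree on
    # the full cipher pair, not just the key material.
    return hmac.new(nonce_a + nonce_b, algorithm, hashlib.sha256).digest()
```

Since both nodes hold both random numbers after the exchange, each side derives the identical session key locally without it ever crossing the network.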
4.2.3. Data Declassification Phase
The data declassification phase is activated after the secure service composition procedure. During service execution, the COA encrypts the insecure inputs and outputs with the session key to realize the declassification of high-level data. Meanwhile, it also decrypts the cipher data so that the service function can process it normally.
4.3. Distributed Secure Service Composition with Declassification Algorithm across Multiple Mobile Clouds
During the secure service composition, the cloud platform verifies the service chain step by step based on Theorem 1. For each candidate service, the platform first verifies whether the input objects satisfy the security conditions, then computes the required security level of each output object, and finally verifies whether the output objects satisfy the security conditions. Meanwhile, if an input or output object fails to satisfy the strict security constraints, key negotiation is executed automatically between the related services. In this case, the procedure still returns true unless the key negotiation fails. The pseudocode of verification and declassification for adjacent services is presented as Algorithm 2.
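The per-step check just described can be sketched as follows. The data structures and the `negotiate_key` callback are assumptions based on the description above, not the actual Algorithm 2.

```python
# Illustrative totally ordered security levels, lowest to highest.
LEVELS = ["public", "internal", "confidential", "secret"]

def idx(level):
    return LEVELS.index(level)

def verify_step(inputs, outputs, deps, negotiate_key):
    """Sketch of one verification step.
    inputs/outputs: {name: {"required": level, "clearance": level}}
    deps: {output_name: [input_names it depends on]}
    negotiate_key(name) -> bool: declassification by key negotiation."""
    # 1. Verify input objects; fall back to key negotiation on failure.
    for name, obj in inputs.items():
        if idx(obj["required"]) > idx(obj["clearance"]):
            if not negotiate_key(name):
                return False
    # 2. Compute required levels of outputs, then verify them.
    for name, obj in outputs.items():
        levels = [inputs[i]["required"] for i in deps.get(name, [])]
        # required level is the maximum over all dependent inputs
        obj["required"] = max(levels + [obj["required"]], key=idx)
        if idx(obj["required"]) > idx(obj["clearance"]):
            if not negotiate_key(name):
                return False
    return True
```

The step fails only when both the strict check and the declassification fallback fail, mirroring the behavior described for Algorithm 2.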
Based on the verification and declassification procedure, we propose a distributed secure service composition with declassification algorithm for mobile clouds. The composition procedure is executed in a distributed way; that is, different cloud platforms in multiple domains need to cooperate with each other to finish the whole procedure. Three types of messages are defined to control the execution of the procedure, that is, start_message, failure_message, and success_message. First, each cloud platform waits for the start message to begin the composition procedure. Then the CP receives the intermediate result of the composition from the start message, including the required security levels of the predecessor's outputs and all executable paths. After that, the CP generates all possible execution paths based on the intermediate result and the candidate services located in its domain and verifies them. For each path p that passes the verification, the CP pushes it into the passed path set PP and records the required security levels of its outputs, which are grouped as the intermediate result for the next composition step. If there is no legal path, the CP sends the failure message to the user to announce the failure of the composition. If the final service step is located in this domain, the CP sends the success message with all passed paths to the user. If there are further steps in different domains, the CP sends the start message with the intermediate result to the next cloud platform to continue the verification procedure. The distributed secure service composition with declassification algorithm is shown as Algorithm 3.
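One round of the distributed loop, as performed by a single cloud platform, can be sketched as follows. The message shapes, field names, and the `verify` callback are illustrative assumptions, not the actual Algorithm 3.

```python
def compose_step(start_message, local_candidates, verify, is_final_step):
    """Sketch of one cloud platform's round in the distributed composition.
    start_message: {"paths": [list of partial execution paths]}
    local_candidates: candidate services located in this CP's domain
    verify(path, candidate) -> bool: Theorem 1 check plus declassification
    is_final_step: whether this domain hosts the last service step."""
    passed = []
    for path in start_message["paths"]:
        for candidate in local_candidates:
            if verify(path, candidate):
                passed.append(path + [candidate])
    if not passed:
        # no legal path: announce composition failure to the user
        return {"type": "failure_message"}
    if is_final_step:
        # final step in this domain: return all passed paths to the user
        return {"type": "success_message", "paths": passed}
    # otherwise hand the intermediate result to the next domain's CP
    return {"type": "start_message", "paths": passed}
```

Each CP thus only verifies its own domain's candidates and forwards the pruned path set, which is the source of the load-balancing benefit claimed for the distributed approach.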