ISRN Artificial Intelligence
Volume 2013 (2013), Article ID 482949, 18 pages
Comparison of Adaptive Information Security Approaches
VTT Technical Research Centre of Finland, Kaitoväylä 1, 90571 Oulu, Finland
Received 27 May 2013; Accepted 24 August 2013
Academic Editors: P. Kokol, Y. Liu, and Z. Liu
Copyright © 2013 Antti Evesti and Eila Ovaska. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Dynamically changing environments and threat landscapes require adaptive information security, which makes it possible to change and modify security mechanisms at runtime. Hence, not all security decisions are enforced at design-time. This paper builds a framework to compare security adaptation approaches. The framework contains three viewpoints, that is, adaptation, security, and lifecycle. Furthermore, the paper describes five security adaptation approaches and compares them by means of the framework. The comparison reveals that the existing security adaptation approaches cover information gathering widely. However, the compared approaches do not describe how to decide on a method to perform a security adaptation. Similarly, means of providing input knowledge for the security adaptation are not covered. Hence, these research areas have to be addressed in the future. The achieved results are applicable for software developers when selecting a security adaptation approach and for researchers when considering future research items.
Heterogeneous and dynamic environments create a need for software products and systems that are able to manage their behaviour and functionality. A software system faces several situations during its lifecycle, and not all of them can be dealt with at design-time. Moreover, various systems are so complex that setup, running, and updating are unmanageable tasks even for professional users [1, 2]. Dobson et al. state that the emphasis is moving from systems developed based on a priori sets of requirements towards platforms that adapt themselves based on changing demands. Some years ago, the vision of autonomic computing was presented. In that vision, self-management is defined to be the essence of autonomic computing. In this paper, however, the terms autonomic computing, self-managing, and self-adaptive are interchangeable. A similar decision is also made in the previous surveys presented by Huebscher and McCann and by Salehie and Tahvildari.
Similar challenges can also be seen from the information security point of view, hereafter called security for short. Evolving environments create new threats, and static security mechanisms are not applicable in all situations. This development creates a requirement for self-adaptive software and for the adaptation of security mechanisms. The purpose of security adaptation is to ensure an appropriate security level in different situations. Self-adaptive software is a closed-loop system with a feedback loop aiming to adjust itself to changes during its operation. This is achieved by means of sensors and executors. The sensors monitor the environment in order to reveal changes, and the executors perform the adaptation. From the security viewpoint, the monitored changes relate to environment or usage changes that affect the required or achieved security level. The adaptation executors concentrate on security mechanisms and policies.
Currently, several security adaptation approaches exist. On the one hand, some approaches concentrate on adapting a particular security mechanism or supporting a specific security attribute. On the other hand, some approaches are generic; that is, they support different attributes and mechanisms. Hence, it is difficult to select the most suitable adaptation approach for different usages. Moreover, it is difficult to know what research steps are needed in the future. Elkhodary and Whittle surveyed four approaches for adaptive security: Extensible Security Infrastructure, Strata Security API, the Willow Architecture, and the Adaptive Trust Negotiation Framework. Elkhodary and Whittle performed their survey in 2007, and hence, new approaches have appeared since. Moreover, the authors compare the type of adaptation achieved with different adaptation approaches. In contrast, the comparison in this paper addresses how the adaptation is performed.
The purpose of this comparison is to describe and compare existing security adaptation approaches presented in the literature. Consequently, a comparison framework is introduced. The framework combines three viewpoints: security, lifecycle, and adaptation. Existing information security taxonomies and the surveys of general adaptation approaches have provided input for the framework definition. We have selected five security adaptation approaches for the comparison, namely, (i) an architectural approach for self-managing security services, (ii) a software framework for autonomic security in pervasive environments, (iii) context sensitive adaptive authentication, (iv) adaptive messaging middleware, and (v) an adaptive security architecture based on the EC-MQV algorithm in pervasive networks. These five approaches are selected based on the following three requirements. Firstly, the selected approach has to combine adaptation and security aspects. Hence, approaches that concentrate on adaptation without considering security are left out of this paper. Secondly, at least an initial implementation of the selected approach has to be realised. Even if the approach is not validated, some kind of implementation is assumed to exist. Thirdly, all selected approaches are published after the survey performed by Elkhodary and Whittle, that is, in 2007 or after. The first and second requirements excluded eight approaches in total. Two of these approaches were presented in journal papers and six in conference and workshop papers. These eight approaches focused mostly on the adaptation aspect, and thus information security was not their focal point. Naturally, it is possible to develop those approaches towards security adaptation. However, the focus here is on approaches that already contain security aspects.
After the introduction, background information is given in Section 2, and the comparison framework is built in Section 3. Section 4 describes the selected adaptation approaches, and Section 5 makes the comparison by means of the comparison framework and discusses future research needs. Finally, conclusions close the paper.
2. Background for Security Adaptation
In this section the focus of the comparison is defined, and the required background information is given. Figure 1 shows three essential knowledge areas required to achieve adaptive security. (1) The autonomic computing area has to be known in order to utilise appropriate patterns, decision making algorithms, and so forth. (2) Knowledge from the security area is needed for selecting adaptive security mechanisms and sensing relevant information. (3) Experience from software development is needed in order to combine the security and autonomic computing areas. To complicate matters, these three areas interact with each other, as illustrated with arrows in Figure 1. Firstly, software development has to take into account requirements and constraints from both the security and autonomic computing areas. Similarly, the software development area declares what kind of security and adaptation features the final product contains. Secondly, autonomic computing sets requirements for the selected security mechanisms and their dynamism. Thirdly, the security area sets requirements for the decision making and sensing mechanisms used in autonomic computing.
All three areas form their own research fields with their own terminology and research foci, and thus it is not possible to cover all of them in one paper. Hence, in this paper the focus area is set as depicted with the dashed circle in Figure 1. The compared security adaptation approaches are investigated from the autonomic computing viewpoint, that is, how adaptability is achieved and which security attributes are adapted. The software development viewpoint is investigated from the evolution viewpoint, that is, how easily the adaptation approach can be utilised when new software is developed. Therefore, several aspects of the whole have to be excluded from the comparison, as depicted outside the dashed circle in Figure 1. Next, background information and related terminology for each of these three areas will be described.
2.1. Autonomic Computing
In the vision of autonomic computing, presented by IBM, the autonomic behaviour is achieved by means of the MAPE-K reference model (Monitor, Analyse, Plan, Execute, and Knowledge). The phases of the MAPE-K model constitute the adaptation loop, depicted in Figure 2. A similar structure is also followed in many adaptation approaches. The previous surveys, that is, Dobson et al., Huebscher and McCann, and Salehie and Tahvildari, utilise the loop structure with similar four phases as a reference model. Hence, the MAPE-K model acts as the reference model for self-adaptive software also in this comparison.
The purpose of the Monitor phase is to collect information from the managed element, that is, the adapted software, and from the execution environment. The monitoring utilises sensors, either hardware or software, to collect relevant data. The Analyse phase combines the collected data and possible history data to reveal whether requirements are not fulfilled, which causes an adaptation need. Consequently, the Analyse phase calls the Plan phase, which creates the adaptation plan. The adaptation plan contains a decision on how the software will be adapted. In order to create the adaptation plan, different algorithms or rules are utilised. Moreover, the Plan phase takes possibly contradicting requirements into account as trade-offs. Finally, the Execute phase enforces the adaptation plan by means of effectors, which affect the managed element.
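As an illustration, the four phases above can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: the function names, the dictionary-based knowledge, and the threshold rules are our own assumptions, not part of the MAPE-K model or of any surveyed approach.

```python
# Minimal MAPE-K loop sketch. All names and the simple threshold-based
# rules are illustrative assumptions, not part of any surveyed approach.

def monitor(sensors):
    """Monitor: read raw data from the managed element and environment."""
    return {name: read() for name, read in sensors.items()}

def analyse(data, knowledge):
    """Analyse: detect an adaptation need by comparing data to requirements."""
    return any(data[attr] < limit for attr, limit in knowledge["requirements"].items())

def plan(data, knowledge):
    """Plan: pick an adaptation plan for the first unfulfilled requirement."""
    for attr, limit in knowledge["requirements"].items():
        if data[attr] < limit:
            return knowledge["plans"][attr]
    return None

def execute(adaptation_plan, effectors):
    """Execute: enforce the plan through an effector on the managed element."""
    effectors[adaptation_plan]()

def mape_loop(sensors, effectors, knowledge):
    """One pass through the Monitor-Analyse-Plan-Execute loop."""
    data = monitor(sensors)
    if analyse(data, knowledge):
        chosen = plan(data, knowledge)
        if chosen is not None:
            execute(chosen, effectors)
```

Note how the shared knowledge dictionary makes the Knowledge part explicit; as discussed below, many realisations instead code this knowledge inside each phase.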
The existing surveys of autonomic computing adopt this loop structure to some degree and explain the four phases of the loop. Nevertheless, the Knowledge part is not clearly defined; even the authors of the MAPE-K model do not describe the Knowledge part. Huebscher and McCann discuss the Knowledge part from the planning point of view. They state that the division between planning and knowledge is not clear. Similarly, the boundary between knowledge and the Monitor, Analyse, and Execute phases is blurred. However, it is clear that the right knowledge is required in each of these phases. The Monitor phase needs knowledge to observe the right attributes of the managed element and environment. In turn, the Analyse phase requires knowledge of how to combine the monitored data in a meaningful way. The Plan phase has to know the right planning algorithm and the means to perform trade-offs. Finally, the Execute phase requires knowledge of how the particular adaptation plan can be enforced in the current implementation and environment. Naturally, all of this knowledge exists in each realisation of an adaptation approach. However, many approaches integrate knowledge inside the adaptation loop, and, consequently, a separate knowledge part is not presented. In other words, the knowledge required in each phase is coded inside that phase.
2.2. Information Security
On one hand, ISO/IEC defines security as follows: “The capability of the software product to protect information and data so that unauthorised persons or systems cannot read or modify them and authorised persons or systems are not denied access to them.” On the other hand, some sources [16, 17] define security as a composition of confidentiality, integrity, and availability, which are called security attributes. Based on the ISO/IEC security definition above, it is clear that authentication and authorisation are also essential security attributes. Some sources also use the terms security goal, objective, and property. Moreover, other security attributes may appear in the literature, for example, privacy, nonrepudiation, and immunity. However, this paper uses the term security attribute, covering confidentiality, integrity, availability, authentication, and authorisation. This attribute set is sufficient to cover the existing security adaptation approaches.
Risk management is a process to identify risks, assess risks, and take steps to reduce risks to an acceptable level. Herzog et al. build an information security terminology and present it in an ontology form by utilising risk management terms, that is, asset, threat, vulnerability, and countermeasure. The Common Criteria define an asset as an entity that someone presumably places value upon. Hence, an asset can be almost anything that needs protection. The National Institute of Standards and Technology (NIST) defines a threat as the possibility that a vulnerability is exercised. A vulnerability is defined as a flaw or weakness in software that could be exercised (accidentally or intentionally) to cause a security breach. Finally, countermeasures refer to means to mitigate the risks caused by threats. However, in this paper the term security mechanism is used instead of countermeasure. Figure 3 illustrates the security concepts described above and their relationships.
Chess et al. discuss security and privacy challenges in autonomic computing environments. The authors mention that the security of autonomic systems is not an entirely new kind of security. Nevertheless, the system has to be secure in each configuration to which it adapts itself. From the security point of view, autonomous systems support self-protection and/or self-healing. Chess et al. characterise these as system capabilities to detect intrusions, eliminate the intrusion, and, finally, restore the system to an uncompromised state. This categorisation concentrates on reactive security approaches; in other words, it assumes that the system is already compromised. However, it is also possible to act proactively and perform self-protecting activities before the system is compromised. This means that the system performs self-configuring and self-optimisation of security mechanisms and architectures in order to achieve reasonable self-protection in various anticipated or predicted situations. It is notable that Chess et al. concentrate on security in autonomic computing environments; that is, the autonomic element performs functional or quality adaptation, and achieving security also in the new state is important. The emphasis of this paper is on autonomic security, which naturally contains similarities with security in autonomic computing. However, by autonomic security we refer to adaptation approaches whose primary purpose is to manage security.
2.3. Software Development
From the software development viewpoint, adaptation is a form of variability where variation occurs at runtime [21, 22]. In this paper, the software development of security adaptation approaches is studied from the architecture viewpoint, as depicted in Figure 1. Hence, different variation techniques and methods to develop software product families are not covered. Bass et al. define software architecture as follows: “The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements and the relationships among them.” In this paper, the purpose is to reveal the architecture used in each security adaptation approach. Hence, the architecture shows how the elements of the adaptation approach are structured and related to each other. Furthermore, the architecture dictates how easily the particular security adaptation approach can be applied.
From the software development viewpoint, the MAPE model is a pattern to achieve adaptation; that is, the pattern is a reusable and generic model to achieve a particular solution. Similarly, dozens of patterns exist to achieve security in software development. Security patterns are surveyed, for instance, in the literature. Figure 4 presents a pattern to achieve basic authorisation. The subject represents an entity that tries to access the object entity, and these two entities are connected with an access right, for example, read or write.
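The structure of this basic authorisation pattern can be sketched as follows; this is a hedged, minimal sketch, and the class and method names are our own, not taken from a pattern catalogue.

```python
# Minimal sketch of the basic authorisation pattern: a subject is connected
# to an object through an access right. Names are illustrative only.

class AccessControl:
    def __init__(self):
        # Each granted right is a (subject, object, right) triple.
        self.rights = set()

    def grant(self, subject, obj, right):
        """Connect a subject to an object with an access right."""
        self.rights.add((subject, obj, right))

    def is_allowed(self, subject, obj, right):
        """Check whether the subject holds the given right on the object."""
        return (subject, obj, right) in self.rights
```

For example, granting ("alice", "report", "read") allows exactly that access and nothing else.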
Matinlassi and Niemelä categorise software quality into execution and evolution qualities. Execution quality refers to quality attributes that are visible at runtime, for example, performance and usability. In contrast, evolution quality refers to quality attributes faced during software development and maintenance. This paper focuses on evolution qualities. Matinlassi and Niemelä list the following evolution qualities: maintainability, flexibility, modifiability, extensibility, portability, reusability, integrability, and testability; in this categorisation, maintainability acts as the parent quality of the other evolution qualities. In the context of this paper, maintainability describes how easily the security adaptation approach can be updated or modified for a new environment. For security adaptation, maintainability is an important property. Firstly, new vulnerabilities might be found. Moreover, new situations for software usage can appear in which the existing adaptation mechanisms are not able to achieve a secure state without updates.
2.4. Related Work
As already referred to, a few surveys of the autonomic computing field already exist. Moreover, as mentioned in the introduction, Elkhodary and Whittle surveyed four approaches for adaptive security. The survey concentrates on evaluating systems that adapt application-level security mechanisms. On one hand, the evaluation concentrates on three security services, namely, authentication, authorisation, and tolerance. Each security service is thought to serve security attributes as follows: authentication supports identification and nonrepudiation, authorisation supports confidentiality and integrity, and, finally, tolerance supports availability. Based on this security attribute categorisation, the authors were able to perform comparisons with the three security services mentioned above. On the other hand, the adaptation viewpoint evaluates the achieved adaptation level. The adaptation dimensions for the security services were absent, fixed, and adaptive. Furthermore, the authors utilise an evaluation scheme that contains the following aspects (the content of each aspect is presented in parentheses): (i) computational paradigm (parameterisation, component based, reflection, or aspect-orientation), (ii) reconfiguration scale (single unit, interunit, or architecture wide), and (iii) conflict handling (user driven, autonomous, or interactive). The final conclusion of the survey notes that none of the presented approaches supports all security services. In addition, the authors summarise that maintainability and reusability seem challenging. In order to avoid overlap, this paper does not reevaluate the security adaptation approaches already discussed by Elkhodary and Whittle.
3. Comparison Framework
In this section a comparison framework for security adaptation approaches is defined. The framework is built around three viewpoints: (i) the adaptation viewpoint, (ii) the security viewpoint, and (iii) the lifecycle viewpoint, following the areas presented in Section 2 and Figure 1. The adaptation viewpoint consists of generic adaptation related properties. Consequently, these properties are common to all adaptive software. The content of the adaptation viewpoint is collected from the existing adaptation surveys and landscape papers mentioned above [2–4]. The security viewpoint contains properties that relate to software security. These properties are a subset of the existing security taxonomies. Finally, the lifecycle viewpoint concentrates on the aspects of software development. Figure 5 depicts the framework and the content of its three viewpoints. Next, the properties in the framework are described in more detail.
Adaptation Viewpoint. The purpose of this category is to collect adaptation features. Six properties are included as follows.
(i) Object to adapt describes what part of the software will be adapted. In the security adaptation approaches this usually refers to security mechanisms and their parameters in different layers and the behaviour of these mechanisms, although it is possible to adapt other parts, in contrast to security mechanisms, in order to affect software security. On one hand, it is possible to change a communication protocol, which directly influences the achieved security. On the other hand, adapting resource utilisation indirectly affects security. These indirect objects play an important role in security as a whole. However, most security adaptation approaches have selected direct security mechanisms as objects to adapt.
(ii) Adaptation timing describes when the adaptation is intended to occur. A start-up time adaptation means that a configuration and parameters are bound when the software is started. In contrast, runtime adaptation means that adaptation is able to occur while the software is up and running. Naturally, a runtime adaptation approach can perform adaptation during start-up, but the adaptation is not limited to that particular moment. Hence, runtime adaptation refers to a more dynamic adaptation approach than start-up adaptation in this comparison. Furthermore, runtime adaptation is able to occur reactively and/or proactively. As mentioned earlier, reactive adaptation takes place when an attack is identified. In contrast, proactive adaptation predicts threats and adapts the software beforehand.
(iii) Monitoring and analysis describes how the adaptation approach recognises a need for adaptation. Each adaptation approach uses some method to collect information on its behaviour and/or environment. The collected information is aggregated or utilised as such in order to detect when adaptation is required. The purpose of this property is to compare what kind of information the adaptation approach requires, how the monitoring is performed, and what kinds of analyses are performed on the information to make the adaptation decision.
(iv) Planning and execution describes how the adaptation approach makes an adaptation plan, that is, what object will be adapted and how, and how the adaptation plan is enforced. In static solutions, the adaptation approach contains a set of predefined adaptation plans for different situations. Alternatively, the adaptation is dynamic, and the adaptation plan is made at runtime based on a utility function or other models.
(v) Knowledge describes how the required input knowledge is provided for the adaptation approach. As mentioned earlier, each phase of the adaptation loop requires certain knowledge. It is possible that the required knowledge is hard coded inside the adaptation phases, or alternatively a dedicated knowledge storage is utilised.
(vi) Self-properties describes what kind of self-properties the adaptation approach supports. (1) Self-configuring refers to a capability to automatically change a used software component. (2) Self-optimisation is a capability to set and modify parameters. (3) Self-protection is a software capability to protect itself from security breaches and mitigate attack effects. (4) Self-healing refers to a capability to determine failures and perform the required recovery actions. (5) Self-awareness is a software capability to know its own state and behaviour. (6) Context-awareness means, in this comparison, a capability to know the operational environment of the software.
Security Viewpoint. The purpose of this viewpoint is to collect security related features from the compared approach. The following properties are included.
(i) Attributes lists which security attributes are supported in an adaptive manner. The supported security attributes can be confidentiality, integrity, availability, authentication, and authorisation. If an adaptation approach focuses on one or a few security attributes, the other security attributes are considered static. In contrast, generic adaptation approaches do not nominate the adapted security attributes.
(ii) Mechanisms describes the security mechanisms that can be adapted. Hence, this property is closely related to object to adapt, described above. It is possible to implement support for the same security attribute with several security mechanisms. The adaptation can concentrate on tuning the parameters of one specific mechanism (self-optimisation), or, alternatively, the adaptation is able to change between different mechanisms (self-configuring). If the adaptation approach is tightly coupled to a particular security mechanism, this affects its generalisation possibilities.
(iii) Protected assets describes the assets that are intended to be secured by means of the security adaptation approach. If a security adaptation approach does not explicitly define assets, the protected asset is identified implicitly.
(iv) Threats describes which security threats the adaptation approach is able to cover in an adaptive manner. Security threats are reasoned from the available material if not stated explicitly.
Lifecycle Viewpoint. The viewpoint concentrates on the aspects of software development. From the evolution qualities listed in Section 2, the framework contains reusability, extensibility, and flexibility. The lifecycle viewpoint has the following properties.
(i) Architecture describes how the elements of the adaptation loop are connected and related to each other. Moreover, the architecture shows how the adaptation approach and the software functionality are coupled. In some approaches adaptation mechanisms are built inside the software functionality, called internal approaches by Salehie and Tahvildari. In contrast, external approaches separate adaptation mechanisms and software functionality into their own parts to support maintainability and reuse.
(ii) Extensibility describes an ability to utilise new components and perform new functions. Different needs to extend the security adaptation approach can appear. Support for a new security mechanism or a security attribute is the most obvious need during the lifecycle. However, extension needs for adaptation mechanisms can also emerge, for example, a need to add a new monitoring component, a different analysis method, or a more sophisticated planning algorithm.
(iii) Reusability describes how easily the components of the adaptation approach, or the whole approach, can be reused as such or slightly modified. The possibility to reuse components of the approach is strongly related to the selected architecture.
(iv) Flexibility describes how easily the adaptation approach can be modified for use in a different environment. Some approaches are tightly coupled to a specific domain or environment. On the other hand, generic approaches can be utilised in various environments without major modifications. Again, the selected architecture dictates how flexible the solution is.
(v) Maturity describes the performed validation, available implementation guidelines, and current activity of the adaptation approach. Validation describes the type of validation performed for the approach. Where validation has been performed, the number of validation use cases and the usage environments vary. Next, implementation guidelines and reusable components offer valuable support for a new user. The type of publishing forum also indicates the maturity of the approach; conference papers typically introduce less mature solutions than journal papers. Similarly, current activity is an important attribute from the maturity viewpoint. A live community or project behind the security adaptation approach offers updates and support for the utilisation of the approach.
The presented framework is intended to offer a universal way to compare the properties of different security adaptation approaches. One can argue about the structure of the framework because its three viewpoints overlap to some degree. However, the purpose of the framework is to compare and highlight the differences between approaches, and thus all the attributes are retained rather than aiming for the most compact framework. In the next section, the above described information is collected from each adaptation approach and presented in Tables 1–5.
4. Security Adaptation Approaches
In this section, five security adaptation approaches are covered, and the details relevant to the comparison framework are described. Each approach is presented in its own subsection containing a description part and a data collection for the comparison framework.
4.1. An Architectural Approach for Self-Managing Security Services
Russello and Dulay present an architectural approach for self-managing security services by means of ESCA (Event, State, Condition, and Action) policies. The approach separates application logic into an application layer, while adaptation elements are located on a middleware layer. Furthermore, security mechanisms are separated from the application logic to the middleware layer. The authors have designed the middleware architecture to comply with the Shared Data Space (SDS) model. The architecture contains a kernel component on the middleware layer and a proxy element, which handles communication between the application and the kernel. The kernel component consists of three parts, that is, the Operation subsystem, the Security subsystem, and the Context subsystem. The Operation subsystem contains the functionality required to implement the SDS model, and thus it is not directly related to security adaptation. In contrast, the Security subsystem contains all security related parts. The first part is the ESCA Policy manager, which manages and enforces policies. The ESCA policy language and its enforcement are described in the authors' earlier publications. The second part of the Security subsystem is the set of security mechanisms, which are adapted based on the selected policy. The authors mention mechanisms for authentication, authorisation, encryption, and fault tolerance. The approach is intended to be generic, and thus these mechanisms are not named in more detail. The final subsystem in the kernel component is the Context subsystem, which contains a set of services that provide contextual information for the Operation and Security subsystems. The authors present the following context services: Trust level, Threat level, Availability monitor, Memory monitor, and Bandwidth monitor.
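To make the Event, State, Condition, Action structure concrete, a policy check can be sketched roughly as below. This is a hypothetical fragment that only mirrors the four named parts; the actual ESCA policy language of Russello and Dulay is considerably richer.

```python
# Rough sketch of an ESCA (Event, State, Condition, Action) policy.
# Only the four named parts are mirrored here; the real ESCA policy
# language is considerably richer.

class EscaPolicy:
    def __init__(self, event, state, condition, action):
        self.event = event          # triggering event name
        self.state = state          # system state in which the policy applies
        self.condition = condition  # predicate over contextual information
        self.action = action        # callable enforced when everything matches

    def handle(self, event, state, context):
        """Enforce the action when event, state, and condition all match."""
        if event == self.event and state == self.state and self.condition(context):
            self.action(context)
            return True
        return False
```

In this reading, the context services of the Context subsystem supply the contextual information that the condition predicate inspects.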
The authors have emphasised the importance of the separation of concerns. Hence, the presented approach is clearly structured into dedicated subsystems and components. However, the mutual behaviour of these subsystems is not emphasised as much. The ESCA Policy manager is validated by means of a case study in [27, 28] but without the security adaptation viewpoint. The validation focuses on sense-and-react applications, which utilise wireless sensor networks. Table 1 collects the data by means of the comparison framework.
4.2. A Software Framework for Autonomic Security in Pervasive Environments
Saxena et al. present a software framework for autonomic security. The framework consists of an adaptation loop with monitoring, analysing, and responding modules. An architecture, which contains these modules and their mutual interactions, is also described. The monitoring modules are registered to observe security related events, called the security context. The authors give an example list of security events, for example, a new authentication schema becoming available, a user location change, and low memory. Hence, security-relevant events can occur in a device or in the execution environment. The analysing modules subscribe to events recognised by the monitoring modules. Based on the received events, each analysing module suggests a high-level security action to reconfigure the system. The responding module maps these high-level security actions to implementation specific subsystems, for example, communication, device authentication, and application. The authors give an example of the high-level security action “increase encryption strength”, which can be mapped to an implementation that increases the key size or performs additional encryption rounds. In addition to the monitoring, analysing, and responding modules, the framework contains a support module. The support module offers a profile database for the other modules. Events from the monitoring modules can be stored in the profile database for future use. Moreover, the database provides information for the analysing modules to support decision making. Finally, the responding module stores the information of the current configuration in the profile database.
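The mapping performed by the responding module can be illustrated with a small lookup table. Apart from the “increase encryption strength” example taken from the description above, the entries and names below are invented for illustration.

```python
# Sketch of the responding module's mapping from high-level security actions
# to implementation-specific steps. Apart from "increase encryption strength",
# the entries are invented for illustration.

RESPONSE_MAP = {
    "increase encryption strength": ["increase key size", "add encryption rounds"],
    "strengthen authentication": ["switch to two-factor authentication"],
}

def respond(high_level_action):
    """Translate a high-level security action into concrete subsystem steps."""
    return RESPONSE_MAP.get(high_level_action, [])
```

A table-driven mapping like this keeps the analysing modules free of implementation detail, which is the point of separating the responding module.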
The framework is applicable for adapting both an individual device and the whole network. Both adaptation alternatives utilise a similar adaptation loop. In [29], He and Lacoste present these adaptation loops as a nested adaptation loop. Nevertheless, the device-level adaptation can be utilised independently, without the network-level adaptation. The framework is implemented on the Nokia 770 Internet tablet; that is, the required components exist. However, no use case that utilises the framework is presented. Table 2 collects the data by means of the comparison framework.
4.3. Context Sensitive Adaptive Authentication
Hulsebosch et al. took initial steps towards adaptive security approaches in the paper “Context Sensitive Access Control” [30] by developing context awareness for security purposes. However, the main focus was on an access control mechanism with context information, without adaptation. Further development is made in [12], which presents context sensitive adaptive authentication. The purpose of the approach is to make static security mechanisms more adaptive and less intrusive. The adaptive authentication approach utilises context information, namely, location and time, for adaptation purposes. The authors concentrate on user authentication. Hence, they propose that “where the user is and when” is an authentication class added to the conventional “what the user is”, “what the user has”, and “what the user knows”. The main idea is to approximate the authentication confidence with the probability of the user being at a certain location at the authentication time. Location information from the user’s different identity tokens is composed, which indicates the probability of being at the certain location. The authors present a fusion algorithm, which calculates the probability that the user is at a certain location based on the location information of the user’s devices. The results of the fusion algorithm are simulated by using the locations of a Bluetooth device and an RFID badge in two different situations. In the first case, the locations of the Bluetooth device and the RFID badge overlap, which increases the authentication confidence. In the second case, the devices’ locations do not overlap, causing decreased authentication confidence. A context management framework collects the context information, that is, location information with timestamps, which is provided to the User Location Probability Calculator (ULPC). The ULPC component calculates probability values with the developed fusion algorithm, and the result is delivered to the application.
The application utilises this information to decide the current authentication level. If the authentication level is not sufficient, the application uses alternative authentication mechanisms, for example, asks for a username and password. The approach is further developed in [31, 32] by introducing an alternative fusion algorithm, which concentrates on the trust of location sensors. However, the initial idea of adaptive authentication is similar regardless of the used fusion algorithm.
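To illustrate the idea (the actual fusion algorithm of the cited papers is more elaborate; the independence-based combination rule and the threshold below are assumptions), overlapping token observations can be fused into an authentication confidence roughly like this:

```python
# Simplified stand-in for the location fusion idea: each identity token
# reports a probability that the user is at the claimed location, and
# overlapping observations raise the combined confidence.

def fuse_location_confidence(token_probabilities):
    """Combine per-token location probabilities, assuming independent
    sensors: combined = 1 - prod(1 - p_i)."""
    miss = 1.0
    for p in token_probabilities:
        miss *= (1.0 - p)
    return 1.0 - miss

# Case 1: Bluetooth device and RFID badge overlap at the same location.
overlap = fuse_location_confidence([0.7, 0.8])      # approx. 0.94
# Case 2: only one token supports the claimed location.
no_overlap = fuse_location_confidence([0.7])        # approx. 0.7

THRESHOLD = 0.9  # hypothetical required authentication level
print(overlap >= THRESHOLD)      # sufficient: no extra authentication
print(no_overlap >= THRESHOLD)   # insufficient: ask username/password
```

This mirrors the two simulated cases: overlapping device locations raise the confidence above the application's required level, while a single non-overlapping token triggers the fallback authentication mechanism.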
Based on the descriptions given in [12, 31, 32], not much focus has been given to the security adaptation itself. This does not mean that the work lacks merit, but its emphasis is more on context awareness and how to apply it. The presented approach can be thought of as a means to monitor the environment or the user’s behaviour, or as an authentication mechanism. Naturally, the monitored information can be utilised during the adaptation. Table 3 collects the data by means of the comparison framework.
4.4. Adaptive Messaging Middleware
Abie et al. present a self-healing and secure adaptive messaging middleware called Genetic Messaging-Oriented Secure Middleware (GEMOM) in [13]. GEMOM is a message-oriented middleware that utilises the publish-subscribe messaging paradigm. The purpose of the self-healing and adaptation features of GEMOM is to ensure optimal security strength and uninterrupted operation under changing environments and threats. Self-healing is achieved by means of replication. In other words, GEMOM heals from faults by deploying a new instance of the messaging system at runtime, utilising the replica nodes of the system. In GEMOM, the Adaptive Security Manager (ASM) component is intended to perform the tasks related to adaptation. Hence, the ASM monitors and analyses security, plans actions, and executes the planned actions. The high-level internal structure of the ASM is described in [33]. The structure follows a generic adaptation loop. However, the plan and execute phases are combined into one adaptation phase. Moreover, learning functionality is added into the analyse phase. Unfortunately, the learning part is not described in more detail. In the presented approach, monitoring is performed by means of anomaly detection, Quality of Service (QoS) monitoring, and security measuring. Of these monitoring techniques, security measuring is described in more detail in [34]. The same paper also describes how different measurements are combined into higher-level security indicators. A State of Security (SoS) is constituted from the security indicators. The purpose is to maintain past, current, and predicted SoS information, which is used for reactive and/or proactive security adaptation. Table 4 collects the data by means of the comparison framework.
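The composition of measurements into indicators and an SoS value can be sketched as follows; this is a hedged illustration only, with assumed metric names and a simple weighted-average scheme standing in for the actual formulas of the cited work:

```python
# Illustrative composition: low-level measurements -> security
# indicators -> State of Security (SoS). Weights and names are assumed.

def indicator(measurements, weights):
    """Weighted average of normalised measurement values in [0, 1]."""
    return sum(m * w for m, w in zip(measurements, weights)) / sum(weights)

# Hypothetical indicators composed from monitored measurements.
auth_strength = indicator([0.9, 0.6], [2, 1])  # e.g. key size, protocol
anomaly_level = indicator([0.8], [1])          # from anomaly detection

# SoS is constituted from the indicators; a history of SoS values
# supports reactive (past/current) and proactive (predicted) adaptation.
sos_history = []
sos = indicator([auth_strength, anomaly_level], [1, 1])
sos_history.append(sos)
print(round(sos, 2))
```

Keeping the SoS history separate from the composition function reflects the stated goal of maintaining past, current, and predicted SoS information.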
4.5. Adaptive Security Architecture Based on EC-MQV Algorithm in Personal Networks (PN)
The approach is presented by Mihovska and Prasad in [14]. The proposed approach is a three-level architecture for context-aware asymmetric key agreement. Three levels mean, in this approach, that three different algorithms for key agreement and exchange exist. The approach concentrates on the key agreement method of an elliptic curve cryptosystem (EC-MQV) in a Personal Network (PN). The PN contains a dynamic set of nodes and devices around a user, and remote nodes and devices (from home, office, etc.) connected over the Internet or a cellular network. A Context-aware Security Manager (CaSM) manages the security parameters used in the PN and thus makes it possible to achieve adaptive security. Firstly, each device joining the PN has to register with the CaSM, which inspects the device and stores its information. In the first phase, a long-term shared key pair is created between the device and the CaSM. In the second phase, a shared secret key is created for devices communicating in the PN. A new shared secret key is created for each session. For each session, the used key agreement method depends on the location and device constraints, that is, context information. In the presented approach, location information indicates different security levels. Hence, in the home environment the one-pass key agreement algorithm is used, which offers the lowest security level. In a known cluster, for example, an office, the two-pass key agreement algorithm is utilised, which supports a medium security level. Finally, in public networks the highest security level is required, which is achieved by means of the three-pass key agreement algorithm. Consequently, the main emphasis is on the key agreement. Unfortunately, the adaptation aspect does not get much visibility in the paper. Table 5 collects the data by means of the comparison framework.
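The location-to-security-level mapping described above can be summarised in a small selection function; the function and location names are illustrative, not taken from the paper:

```python
# Context-based selection of the key agreement algorithm: the number of
# passes rises with the required security level of the location.

def select_key_agreement(location):
    """Return the number of key-agreement passes for a location context."""
    levels = {
        "home": 1,           # one-pass: lowest security level
        "known_cluster": 2,  # e.g. office: medium security level
        "public": 3,         # three-pass: highest security level
    }
    return levels[location]

# A new session key is negotiated per session, so the selection runs at
# every session establishment.
for place in ("home", "known_cluster", "public"):
    print(place, select_key_agreement(place))
```

Because the analysis (location implies security level) is fixed beforehand, the adaptation here happens at session start-up rather than at runtime, which matches the discussion in Section 5.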
5. Comparisons and Analysis
This section collects all the data from the approaches in a concise form (cf. Table 6). The purpose of the table is to facilitate comparison of the presented approaches. In the table, a line symbol (—) indicates that the aspect is not covered in the approach. In order to compare extensibility and flexibility, the following grades are used: extensibility grades Completely, Partially, and No, and flexibility grades Easy, Moderate, and Hard.
For extensibility, a Completely grade means that the approach can be extended from both the security and adaptation viewpoints. In other words, it is possible to add support for new security attributes and mechanisms. Moreover, the approach can be extended to use new adaptation techniques, that is, new monitors, analyses, and planning algorithms. A Partially grade means that the approach supports either security or adaptation related extensions. Finally, a No grade means that extensions cannot be made or are laborious. Similarly, flexibility is graded into three levels: Easy, Moderate, and Hard. An Easy grade means that utilising the approach in a new usage environment or situation requires only slight modifications. Hence, the architecture follows the separation of concerns principle and the required modifications are anticipated. A Moderate grade means that the approach can be applied in different environments. However, the amount of required changes is higher, or, alternatively, the selected architecture complicates changes. Lastly, a Hard grade indicates that the approach is closely tied to a particular environment or usage. Thus, changing the usage environment or situation will be laborious.
After the comparison table, the differences between these approaches are analysed.
This paper developed a framework to compare security adaptation approaches. The framework contains three viewpoints, that is, adaptation, security, and lifecycle. Each of these viewpoints brought out different aspects of the compared security adaptation approaches.
In the adaptation viewpoint, the first attributes, that is, the object to adapt and the adaptation timing, do not show any surprises. In the presented adaptation approaches, the most emphasis is put on the monitor and analyse phases. In other words, each approach somehow monitors its own behaviour and/or environment and makes an adaptation decision based on the monitored data. The monitored data varies a lot from one approach to another. In some approaches, the monitoring concentrates on context-related data, for example, location. In contrast, some approaches directly monitor security threats. Several aspects can affect security, and thus it is natural that the monitored attributes vary between approaches. However, how the monitoring has to be performed is not described in these approaches. In other words, it is not always clear which kind of sensor, software or hardware, is needed to collect the required information.
Most of the compared approaches contain a separate component for the analysis. This separation facilitates reuse and extensibility. However, the approaches do not emphasise the content of the analyse phase as much as the monitoring phase. Approaches 3 and 4 (cf. the numbers in the first row of Table 6) also describe the analyse phase. Approach 3 presents the fusion algorithm to compose location information, but the final authentication level has to be decided in the application. Approach 4 presents formulas to compose the results of security metrics collected during the monitoring phase. In contrast, approach 5 utilises analyses performed beforehand; that is, the security levels for different locations are decided statically in advance. Finally, approaches 1 and 2 do not describe how the analyses are performed.
Moving forward to planning and execution reveals that these are the least covered phases of the adaptation loop in these approaches. Approaches 1 and 5 utilise static planning; that is, the adaptation plan is created beforehand and described in a policy description or in application logic. The other approaches do not describe algorithms or rules to define adaptation plans. Furthermore, how the adaptation plan will be enforced, that is, the execution phase, is not described. This phase should not be underestimated because each adaptation plan might cause side effects and require totally different actions. For example, optimising parameters for a TLS connection requires that the whole connection is recreated, which in turn might affect application functionality. Approach 5 also covers the execution phase. However, the approach is intended for start-up-phase adaptation; that is, adaptation occurs when a new connection is established. Hence, it is not appropriate to compare this approach with runtime adaptations.
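The TLS side-effect example can be made concrete with a toy model; the class below is hypothetical and stands in for a real TLS stack, only to show that applying a new parameter set forces the execute phase to tear down and re-establish the connection:

```python
# Toy illustration of an execute-phase side effect: changing security
# parameters invalidates live connection state.

class SecureChannel:
    """Hypothetical secure channel; not a real TLS API."""
    def __init__(self, ciphers):
        self.ciphers = ciphers
        self.handshakes = 0
        self.connect()
    def connect(self):
        self.handshakes += 1   # a real TLS handshake would occur here
    def apply_plan(self, new_ciphers):
        # Executing the adaptation plan: new parameters only take
        # effect on a fresh connection, so a full reconnect is needed.
        self.ciphers = new_ciphers
        self.connect()

chan = SecureChannel("AES128-GCM")
chan.apply_plan("AES256-GCM")
print(chan.handshakes)   # two handshakes: initial connect plus reconnect
```

An execute phase that ignores this side effect would leave the application communicating over a channel that still uses the old parameters.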
As mentioned in Section 2, the adaptation loop also requires knowledge. Naturally, each adaptation approach contains the required knowledge in some form. However, the utilised knowledge, and how it is stored, is barely described. Approaches 2 and 4 contain a database, which offers input for the adaptation phases. However, the content of the database is not described in these approaches. Approach 1 contains knowledge in policies and in the monitoring services. In this approach, policies describe which security mechanism to use in different situations. Furthermore, the monitoring services also perform the analyse phase, and the required knowledge is integrated into these services. Thus, approaches 1, 2, and 4 address where the knowledge can be stored. Nevertheless, none of these approaches describes which kind of knowledge is required during the adaptation.
Utilising an adaptation approach requires that the knowledge is also available. If the software components of the adaptation approach are available but the required knowledge is missing, the components are useless. Without knowledge, each component requires hard-coded, that is, integrated, decisions to describe what to monitor, how to compose monitoring results, and how to adapt. Naturally, hard coding all these decisions is not sufficient for autonomous adaptation. In other words, hard coding leads to an adaptation approach with a huge set of if-then-else clauses, which is a fairly static approach and not flexible, extensible, or reusable as such. Hence, separating the knowledge from the adaptation phases, instead of integrating them together, makes it easier to achieve evolution qualities. Furthermore, the lack of knowledge models is another aspect that complicates the achievement of evolution qualities. A knowledge model, which describes the form of the knowledge explicitly, will facilitate the evolution of the approach.
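The argued separation can be sketched as follows (all rule contents are hypothetical): the adaptation knowledge lives in a data structure that can be extended without touching the loop code, instead of being buried in if-then-else clauses:

```python
# Knowledge separated from the adaptation logic: rules are data, so new
# rules can be added or loaded without modifying the planner itself.

ADAPTATION_RULES = [
    # (condition on monitored context, adaptation action) — illustrative
    (lambda ctx: ctx["network"] == "public", "use_strong_encryption"),
    (lambda ctx: ctx["auth_confidence"] < 0.5, "require_reauthentication"),
]

def plan(context, rules=ADAPTATION_RULES):
    """Return the actions whose conditions hold for the monitored context."""
    return [action for condition, action in rules if condition(context)]

ctx = {"network": "public", "auth_confidence": 0.9}
print(plan(ctx))
```

The hard-coded alternative would embed both conditions and actions in the planner's body, so every new situation would mean editing and redeploying the component, which is exactly the inflexibility criticised above.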
Table 7 summarises the contributions of the presented approaches from the adaptation loop viewpoint. In summary, in these security adaptation approaches the most effort is put into the first phases of the adaptation loop. In contrast, the plan and execute phases and the knowledge are bypassed and described only with a few sentences.
From the security viewpoint, the compared security adaptation approaches are quite similar. The approaches are either generic, that is, not intended for specific security attributes or mechanisms, or clearly dedicated to particular adaptations. Approach 5 is the most specific one, concentrating on a particular key agreement algorithm. In contrast, approach 3 is intended for authentication, but the authentication mechanism is not bound. The other three approaches are generic, and thus the end user is able to utilise these approaches for various security attributes. However, it can be assumed that more implementation effort will be needed when utilising a generic approach instead of a more specific one.
In the lifecycle viewpoint, architecture is the first comparison attribute. The used architecture also affects the achieved evolution qualities. The architectural structure is described in each approach. In other words, at least the required software components are described. Furthermore, approaches 2 and 3 also describe the architectural behaviour, that is, how the components call each other.
The level of extensibility varies between the approaches. Approaches 1 and 2 support extensibility completely; that is, the approaches can be extended from both the security and adaptation viewpoints. On the other hand, approaches 4 and 5 offer partial extensibility. In approach 4, the monitoring part, called security metrics in the approach, can be extended. In contrast, approach 5 can be extended with new security mechanisms. Finally, approach 3 is strongly designed for authentication adaptation with the particular fusion algorithm, and thus extending the approach will be laborious. Similarly, flexibility varies between the approaches. Approaches 1, 2, and 5 can be easily applied to a new environment or usage situation. Approach 4 supports flexibility at a moderate level. The approach is already utilised in various environments. However, in this approach the selected architecture causes challenges for flexibility. Approach 3 is strongly bound to location and timestamp context information. Therefore, utilising the approach in another way will require a lot of work. Finally, reusability is broadly supported in the approaches. Approaches 1 and 2 can be reused as a whole, or, alternatively, separate components can be reused. On the other hand, approach 3 can be reused only as a whole. In this approach, the individual components are strongly tied to the approach, and thus reusing them as such is not reasonable. Lastly, approaches 4 and 5 contain individual components, which can be reused easily.
In the lifecycle viewpoint, the maturity attribute revealed that validation has been performed for approaches 3 and 4. However, none of these approaches provides support via a software community or even user guidelines. The lack of communities and guidelines is natural because each of these approaches is presented in research papers. Thus, utilising these approaches will be challenging and will require a lot of work. However, it is notable that the authors of these approaches do not claim that their solutions are ready for wider usage.
5.2. Future Work
The existing security adaptation approaches do not cover the whole adaptation loop and the knowledge part. Nevertheless, each approach offers viable components and ideas for further utilisation. For instance, combining the presented monitoring and analysing techniques would provide a novel security adaptation solution. Consequently, the new solution would be able to observe security-relevant attributes and events broadly, from the execution environment and from the security mechanisms.
In the future, it is important to achieve a solution which is able to cover the whole adaptation loop. One possibility to achieve an appropriate solution is to utilise bottom-up development and to reuse the existing approaches. Consequently, not all elements need to be reinvented, and the evolution of the existing components will benefit the developed approach. Developing the whole adaptation loop for one security attribute first and then extending the approach to other attributes might be a reasonable way to proceed. Hence, each security attribute can be covered at a detailed level. Naturally, the utilised architecture has to be selected carefully in order to ensure the extensibility of the approach with new security attributes. In contrast, developing the whole adaptation loop in a generic manner, for all security attributes at once, will be a challenging and error-prone process.
As mentioned above, more emphasis is especially needed on the planning and knowledge parts. For planning, one possibility is to utilise existing decision-making algorithms developed in the autonomic computing field. Furthermore, applying techniques utilised in other adaptation approaches, for example, in performance adaptation, can offer valuable input for planning in security adaptation. For the knowledge part, the required research is twofold. Firstly, the content of the required knowledge has to be defined explicitly. Naturally, collecting knowledge has to be an iterative process; that is, new knowledge is defined in small pieces and the existing knowledge is updated. At least the following knowledge is needed: the applicable sensors and what they monitor, how to use the monitored data in the analyses, and how to create the adaptation plan. In other words, different knowledge is needed in different phases of the adaptation. In addition, security knowledge has to be included, containing descriptions of security attributes, mechanisms, and their relationships. Similarly, knowledge of the execution environment is needed, that is, context information. Secondly, developing the knowledge base means that an appropriate format for the knowledge is selected. This is a technical viewpoint of the knowledge base development, covering data formats and the used database solutions. However, it is an important selection because it dictates how easily the knowledge base can be updated in the future.
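As a hedged illustration of the knowledge content listed above (the structure and every entry are assumptions, not a proposed format), such a knowledge base could initially look like this:

```python
# Illustrative knowledge base covering the listed categories: sensors
# and what they monitor, analysis hints, security attributes and
# mechanisms with their relationships, and context information.

knowledge_base = {
    "sensors": {
        "location_sensor": {"monitors": "user_location"},
        "traffic_probe": {"monitors": "anomalous_messages"},
    },
    "analysis": {
        "user_location": "fuse probabilities from identity tokens",
    },
    "security": {  # attribute -> supporting mechanisms
        "confidentiality": ["AES", "TLS"],
        "authentication": ["password", "RFID_badge"],
    },
    "context": {"environments": ["home", "office", "public"]},
}

# Iterative update: new knowledge is added in small pieces without
# changing the adaptation components that read the knowledge base.
knowledge_base["security"]["integrity"] = ["HMAC"]
print(sorted(knowledge_base["security"]))
```

Whether such content ends up in a plain dictionary, an ontology, or a database is exactly the format selection discussed above; the point is only that each adaptation phase finds its own kind of knowledge here.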
Consequently, additional research is needed in order to combine the monitoring and analysing phases coherently with the planning and execution phases, and to support the adaptation with appropriate knowledge. Hence, cross-discipline research and cooperation between security and autonomic computing experts are required. Furthermore, achieving a reusable and long-lasting security adaptation approach demands that architecture and software development aspects are taken into account in future research.
Achieving appropriate security in dynamically changing environments and threat landscapes is not possible by means of static, beforehand-selected security mechanisms. Therefore, information security has to be implemented in a self-adaptive way. In adaptive security, the utilised security mechanisms can be changed and modified at runtime, and thus all security-related decisions are not bound at design-time.
This paper developed a framework to compare security adaptation approaches. The framework consists of three viewpoints, namely, the adaptation, security, and lifecycle viewpoints. The adaptation viewpoint concentrates on the used adaptation model. The security viewpoint covers software security related properties. Lastly, the lifecycle viewpoint compares architectures, evolution qualities, and maturity. The framework was utilised to compare five security adaptation approaches. The comparison showed that the monitor phase is described in each adaptation approach and that most approaches also contain the analyse phase. However, the plan phase, where the decision of how to adapt is made, is not described at an adequate level. In addition, the compared approaches do not offer means to manage the knowledge required during the adaptation, or, alternatively, the content of the required knowledge is not described. Hence, these two areas have to be researched more in the future. In order to achieve a coherent security adaptation approach, cooperation between autonomic computing, security, and software development experts will be needed. As a result of this collaboration, an approach which is secure, easy to utilise, and at the same time contains the necessary adaptation aspects can be achieved.
This work has been carried out in the SOFIA ARTEMIS project (2009–2011) and SASER-Siegfried Celtic-Plus project (2012–2015) funded by the Finnish Funding Agency for Technology and Innovation (Tekes), VTT Technical Research Centre of Finland, and the European Commission.
- J. O. Kephart and D. M. Chess, “The vision of autonomic computing,” Computer, vol. 36, no. 1, pp. 41–50, 2003.
- M. C. Huebscher and J. A. McCann, “A survey of Autonomic computing—degrees, models, and applications,” ACM Computing Surveys, vol. 40, no. 3, article 7, 2008.
- S. Dobson, S. Denazis, A. Fernández et al., “A survey of autonomic communications,” ACM Transactions on Autonomous and Adaptive Systems, vol. 1, no. 2, pp. 223–259, 2006.
- M. Salehie and L. Tahvildari, “Self-adaptive software: landscape and research challenges,” ACM Transactions on Autonomous and Adaptive Systems, vol. 4, no. 2, article 14, 2009.
- A. Elkhodary and J. Whittle, “A survey of approaches to adaptive application security,” in Proceedings of the Software Engineering for Adaptive and Self-Managing Systems Workshop, pp. 16–23, Minneapolis, Minn, USA, May 2007.
- B. Hashii, S. Malabarba, R. Pandey, and M. Bishop, “Supporting reconfigurable security policies for mobile programs,” Computer Networks, vol. 33, no. 1, pp. 77–93, 2000.
- W. Hu, J. Hiser, D. Williams et al., “Secure and practical defense against code-injection attacks using software dynamic translation,” in Proceedings of the 2nd International Conference on Virtual Execution Environments, pp. 2–12, ACM, Ottawa, Canada, June 2006.
- J. C. Knight and E. A. Strunk, “Achieving critical system survivability through software architectures,” in Architecting Dependable Systems II, R. Lemos, C. Gacek, and A. Romanovsky, Eds., pp. 51–78, Springer, Berlin, Germany, 2004.
- T. Ryutov, L. Zhou, C. Neuman, T. Leithead, and K. E. Seamons, “Adaptive trust negotiation and access control,” in Proceedings of 10th ACM Symposium on Access Control Models and Technologies, pp. 139–146, Yorkshire, UK, June 2005.
- G. Russello and N. Dulay, “An architectural approach for self-managing security services,” in Proceedings of the IEEE International Conference on Advanced Information Networking and Applications Workshops, pp. 153–158, Bradford, UK, May 2009.
- A. Saxena, M. Lacoste, T. Jarboui, U. Lücking, and B. A. Steinke, “Software framework for autonomic security in pervasive environments,” in Information Systems Security, P. McDaniel and S. Gupta, Eds., pp. 91–109, Springer, Berlin, Germany, 2007.
- R. Hulsebosch, M. Bargh, G. Lenzini, P. Ebben, and S. Iacob, “Context sensitive adaptive authentication,” in Smart Sensing and Context, G. Kortuem, J. Finney, R. Lea, and V. Sundramoorthy, Eds., pp. 93–109, Springer, Berlin, Germany, 2007.
- H. Abie, R. M. Savola, J. Bigham, I. Dattani, D. Rotondi, and G. Da Bormida, “Self-healing and secure adaptive messaging middleware for business-critical systems,” International Journal On Advances in Security, vol. 3, pp. 34–51, 2010.
- A. Mihovska and N. R. Prasad, “Adaptive security architecture based on EC-MQV algorithm in personal network (PN),” in Proceedings of the 4th Annual International Conference on Mobile and Ubiquitous Systems: Computing, Networking & Services (MobiQuitous '07), pp. 1–5, August 2007.
- ISO/IEC 9126-1, 2001 Software Engineering—Product Quality—Part 1: Quality Model, International Organization of Standardization, 2001.
- A. Avižienis, J.-C. Laprie, B. Randell, and C. Landwehr, “Basic concepts and taxonomy of dependable and secure computing,” IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 1, pp. 11–33, 2004.
- ISO/IEC 15408-1, 2009 Common Criteria for Information Technology Security Evaluation—Part 1: Introduction and General Model, 2009.
- G. Stoneburner, A. Goguen, and A. Feringa, “Risk management guide for information technology systems,” Tech. Rep. 800-30, 2002.
- A. Herzog, N. Shahmehri, and C. Duma, “An ontology of information security,” International Journal of Information Security and Privacy, vol. 1, pp. 1–23, 2007.
- D. M. Chess, C. C. Palmer, and S. R. White, “Security in an autonomic computing environment,” IBM Systems Journal, vol. 42, no. 1, pp. 107–118, 2003.
- M. Svahnberg, J. van Gurp, and J. Bosch, “A taxonomy of variability realization techniques,” Software: Practice and Experience, vol. 35, no. 8, pp. 705–754, 2005.
- E. Niemelä, A. Evesti, and P. Savolainen, “Modeling quality attribute variability,” in Proceedings of the 3rd International Conference on Evaluation of Novel Approaches to Software Engineering, pp. 169–176, Madeira, Portugal, May 2008.
- L. Bass, P. Clements, and R. Kazman, Software Architecture in Practice, Addison-Wesley, Boston, Mass, USA, 2nd edition, 2003.
- N. Yoshioka, H. Washizaki, and K. Maruyama, “A survey on security patterns,” Progress in Informatics, no. 5, pp. 35–47, 2008.
- T. Priebe, E. Fernandez, J. Mehlau, and G. Pernul, “A pattern system for access control,” in Research Directions in Data and Applications Security XVIII, C. Farkas and P. Samarati, Eds., pp. 235–249, Springer, Boston, Mass, USA, 2004.
- M. Matinlassi and E. Niemelä, “The impact of maintainability on component-based software systems,” in Proceedings of the 29th IEEE Euromicro Conference, pp. 25–32, Belek, Turkey, September, 2003.
- G. Russello, L. Mostarda, and N. Dulay, “ESCAPE: a component-based policy framework for sense and react applications,” in Component-Based Software Engineering, M. Chaudron, C. Szyperski, and R. Reussner, Eds., pp. 212–229, Springer, Berlin, Germany, 2008.
- G. Russello, L. Mostarda, and N. Dulay, “A policy-based publish/subscribe middleware for sense-and-react applications,” Journal of Systems and Software, vol. 84, no. 4, pp. 638–654, 2011.
- R. He and M. Lacoste, “Applying component-based design to self-protection of ubiquitous systems,” in Proceedings of the 3rd ACM Workshop on Software Engineering for Pervasive Services (SEPS '08), pp. 9–14, Sorrento, Italy, July 2008.
- R. J. Hulsebosch, A. M. Salden, M. S. Bargh, P. W. G. Ebben, and J. Reitsma, “Context sensitive access control,” in Proceedings of 10th ACM Symposium on Access Control Models and Technologies, pp. 111–119, Stockholm, Sweden, June 2005.
- G. Lenzini, M. S. Bargh, and B. Hulsebosch, “Trust-enhanced security in location-based adaptive authentication,” Electronic Notes in Theoretical Computer Science, vol. 197, no. 2, pp. 105–119, 2008.
- J. M. Seigneur, G. Lenzini, and B. Hulsebosch, “Adaptive trust management,” in Self-Organising Software, G. Di Marzo Serugendo, M. Gleizes, and A. Karageorgos, Eds., pp. 379–403, Springer, Berlin, Germany, 2011.
- H. Abie, “Adaptive security and trust management for autonomic message-oriented middleware,” in Proceedings of the 6th IEEE International Conference on Mobile Adhoc and Sensor Systems, pp. 810–817, Macau, China, October 2009.
- R. Savola and H. Abie, “Development of measurable security for a distributed messaging system,” International Journal On Advances in Security, vol. 2, pp. 358–380, 2009.