Research Article | Open Access
A Norm Compliance Approach for Open and Goal-Directed Intelligent Systems
The increasing development of autonomous intelligent systems, such as smart vehicles, smart homes, and social robots, poses new challenges. Among them, ensuring that such systems behave lawfully is one of the crucial topics to be addressed for improving their employment in real contexts of daily life. In this work, we present an approach for norm compliance in the context of open and goal-directed intelligent systems working in dynamic normative environments, where goals, services, and norms may change. Such an approach complements a goal-directed system by modifying its goals and the way to achieve them so as to take norms into account, thus influencing the practical reasoning process that goal-oriented systems implement for figuring out what to do. The conformity to norms is established at the goal level rather than at the action level. The effect of a norm that acts at the goal level spreads to the lower level of actions, thus also improving system flexibility. Recovery mechanisms are provided to face exceptional situations that could be caused by normative changes. A case study in the field of business organizations is presented for demonstrating the strengths of the proposed solution.
Intelligent systems are increasingly used with some degree of autonomy. Thus, their behaviour is not entirely defined by the designer but is the result of cognitive capabilities. Modern intelligent systems can determine by themselves the behaviour to adopt for achieving their objectives. In doing so, they may engage in behaviours that, in real life, are regulated by more or less stringent norms producing different effects. For example, smart cars have to obey city traffic laws. Intelligent information systems have to comply with data protection laws to manage private information. Social robots have to respect social norms during interaction with humans. Smart workflow management systems have to respect business rules to perform business processes, and so on.
A growing issue in the field of artificial intelligence is how to ensure that the behaviour of intelligent systems complies with the normative environment in which they operate, so that these systems can be well accepted and efficiently employed in real contexts of everyday life.
In conventional approaches, norms are fully specified at design-time, and the system is designed in such a way that its behaviour does not violate such rules. These approaches, based on hard-coded static norms, are not practical solutions because all the possible situations that a system has to manage should be established at design-time; otherwise, the system has to be redesigned. More advanced approaches use model-checking techniques for verifying the system behaviour off-line. Such methods become impracticable or of limited use as the autonomy of the system increases, making its behaviours not entirely predictable. Moreover, because the normative environments in which systems operate are increasingly dynamic, norm compliance has to be guaranteed at run-time to avoid the shutdown of the system and its reconfiguration.
In this paper, we present an approach for ensuring system run-time compliance with a dynamic set of norms in the context of open and goal-directed systems. The conformity with norms is guaranteed at a higher level of abstraction (i.e., the goal level). Such an approach complements a goal-directed system by modifying its goals and the way to achieve them so as to take norms into account, thus influencing the practical reasoning process that goal-oriented systems implement for figuring out what to do. In our approach, we also face the problem of managing normative changes during system execution, which may make the system incoherent with existing norms, by providing recovery mechanisms. In this work, we refined some theoretical foundations that were preliminarily presented in previous work, and we introduced new ones for defining the algorithms that implement the proposed approach. A widely known case study in the context of business organizations is also presented for illustrating the behaviour of a goal-oriented system that implements our algorithms.
The rest of the paper is organized as follows. Sections 2 and 3 present some grounding literature and the theoretical background of the paper, respectively. In Section 4, a general overview of the approach is presented. Sections 5 and 6 present the key concepts and the algorithms for normative reasoning. Section 7 illustrates a case study in the context of business organizations. Finally, in Section 8 conclusions are drawn.
2. Related Work
Norms like obligations, permissions, and prohibitions have been implemented in automatic systems to specify (un)desired or (un)lawful behaviours. Normative systems  are commonly defined as systems that specify, for every possible transition, whether that transition is considered to be legal. They determine which actions or which states should be achieved or avoided [7–9].
Much work has been done on normative frameworks in the field of Electronic Institutions or Virtual Organizations, where norms have found a natural implementation. To cite only a few, Alechina et al.  present a programming framework for developing normative organizations based on N-2APL, a BDI-based agent programming language supporting normative concepts such as obligations, prohibitions, and sanctions. In that work, the interaction between agents and the environment is regulated by a “normative exogenous organization” which is defined using a set of conditional norms. A norm-aware deliberation approach is also proposed. It allows agents to determine the set of plans (adopted for satisfying a goal) of highest priority which does not violate higher priority prohibitions. In , Kollingbaum and Norman proposed the Normative Agent Architecture (NoA). It supports the implementation of norm-governed practical reasoning agents. NoA agents are motivated by norms to act. In the NoA language, all the effects of a plan are declared in a plan specification. These effects are considered by agents for reasoning about plan selection and execution. Moreover, the norms governing the behaviour of an NoA agent refer to actions that are obligatory, permitted, or forbidden, or to states of affairs that are obligatory, permitted, or forbidden. The NoA language enables an agent to be programmed in terms of plans and norms. Normative statements formulated in the NoA language express obligations, permissions, and prohibitions of an agent. In , the problem of regulating the operation of open multiagent systems in which multiple interrelated activities take place is addressed, thus involving the distributed management of norms. The authors propose normative structures as a means to observe and manage the evolution of normative positions in each activity and their propagation to distributed activities.
In , the authors treat a computer supported cooperative work (CSCW) system as an organization in a society, so as to use the abstraction of an organizational model. Hence, they propose a logical predicate-based organizational model with an organizational state machine (OSM) to describe norms in CSCW systems. The organizational model allows describing the behavioural rules of roles, and the OSM allows checking for logical conflicts among the rules.
Other works have addressed norm change and norm consistency, providing agents with mechanisms for enacting behaviour modification. Typically, new plans/actions have been created to comply with new norms [14–16]. In , Jiang et al. propose a normative structure, named Norm Nets (NNs), for modeling sets of interrelated regulations. NNs are aimed at verifying whether executions of business processes comply with process regulations. The authors define a norm as a tuple of elements that specify the type of deontic operator, the pair role-action (the target) to which the deontic modality is assigned, a deadline of norm validity, and a precondition that determines when the target is initiated. A formal method for checking norm compliance by using Colored Petri Nets is proposed. In [18, 19], the authors propose a means for automatically detecting and solving conflict and inconsistency in norm-regulated Virtual Organizations and Electronic Institutions.
This work is developed in the context of open and goal-directed systems that are able to work in dynamic normative environments and to organize their behaviour according to environmental changes.
The approach we propose is founded on a norm compliance algorithm that, starting from the knowledge about norms, goals, and the state of the world and acting at the goal level, allows the system to plan the right behaviour in conformity with the normative context. Such an algorithm also allows the system to address exceptional situations that may occur, mainly when several norms act simultaneously. We considered three cases in detail: inconsistent norms, which contain a logical contradiction; the presence of an antinomy, namely, a conflict between two norms that are mutually exclusive; and, finally, norms that are incompatible with system requirements, in other words, norms that make goals not satisfiable.
3. Theoretical Background

This section is organized in two parts. The first one introduces norms and normative reasoning. The second one gives some details about antinomies in legal theory.
3.1. Normative Reasoning
Norms are everywhere in our daily life. We are obligated to follow traffic regulations while driving. We have to respect contractual restrictions at work. We try to follow etiquette to be well accepted in society, and so on. These examples span from hard to soft constraints. Generally speaking, norms can be seen as normative rules used for governing conducts, procedures, or states of affairs within a particular sphere of knowledge. Normative reasoning is reasoning conforming to or based on norms. It is formally studied through deontic logic , which is the study of the logical relationships among propositions asserting that specific actions or states of affairs are obligatory, forbidden, or permitted. From the application point of view, deontic logic is the logic that deals with the actual as well as the ideal behaviour of systems . Unlike classical logic, when working with norms the main issue to be considered is that norms are neither true nor false. For overcoming this issue, a common approach is to consider deontic logic as a logic of normative propositions. The underlying concept is that, even if norms are neither true nor false, someone may state that something ought to be done (according to norms): the statement Mary ought to pay the taxes is, then, a true or false description of a normative situation .
Standard deontic logic [23, 24] is the most studied system of deontic logic and one of the first deontic logics axiomatically specified. It is obtained from modal logic and employs three modal operators O, P, and F, where OA means that it is obligatory that A, PA means that it is permitted that A, and FA means that it is forbidden that A. Taking O as primitive, the operators P and F can be defined by the equivalences PA ≡ ¬O¬A and FA ≡ O¬A, where PA means that it is not obligatory that not A and FA means that it is obligatory that not A. In this context, A designates a proposition that asserts that an act of the sort A is done. Thus, OA is read as “it is obligatory that the situation described by the descriptive sentence A is realized”.
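These equivalences can be checked mechanically under the standard possible-worlds reading, in which OA holds iff A is true in every deontically ideal world. The following sketch is our own illustration (the worlds and propositions are invented), not part of the formal treatment above:

```python
# Possible-worlds reading of the deontic operators O, P, F.
# A "world" is a frozenset of true atoms; a proposition is a predicate on worlds.

def O(A, ideal):   # obligatory: A holds in every deontically ideal world
    return all(A(w) for w in ideal)

def P(A, ideal):   # permitted: A holds in at least one ideal world
    return any(A(w) for w in ideal)

def F(A, ideal):   # forbidden: A holds in no ideal world
    return not any(A(w) for w in ideal)

def NOT(A):        # propositional negation, lifted to predicates on worlds
    return lambda w: not A(w)

# Two ideal worlds: in both, taxes are paid; smoking varies.
ideal = [frozenset({"pay_taxes"}), frozenset({"pay_taxes", "smoke"})]
pay = lambda w: "pay_taxes" in w
smoke = lambda w: "smoke" in w

# PA == not O(not A)   and   FA == O(not A)
assert P(smoke, ideal) == (not O(NOT(smoke), ideal))
assert F(smoke, ideal) == O(NOT(smoke), ideal)
assert O(pay, ideal) and not F(pay, ideal)
```

Under this semantics the two defining equivalences hold for any nonempty set of ideal worlds.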
3.2. Antinomy in Legal Theory
In legal theory, an antinomy is defined as an incompatibility relation between two norms belonging to the same legal system. In classical logic, the incompatibility between two propositions is determined by the impossibility that they are both true. In legal theory, the concept of incompatibility is founded on deontic logic, which refers to the obligatoriness of prescriptive assertions.
The deontic square of oppositions  shows all the possible relations between two norms in terms of obligations, prohibitions, permissions, and negative permissions (see Figure 1). The most common types of oppositions between the four modalities are as follows: (1) Incompatibility between a norm that prescribes X (must(X)) and a norm that prohibits X (must(¬X)). Only one of them may be in effect at any time.
(2) Incompatibility between a norm that prescribes X (must(X)) and a norm that allows not X (may(¬X)). An action is either obligatory or omissible. Omissibility corresponds to the absence of an obligation (may(¬X) ≡ ¬must(X)) and vice versa.
(3) Incompatibility between a norm that prohibits X (must(¬X)) and a norm that allows X (may(X)). An action is either forbidden or allowed. Permission therefore corresponds to the absence of a prohibition (may(X) ≡ ¬must(¬X)). For solving an antinomy, three criteria are adopted in legal theory. The lex posterior criterion states that the later law overrides the earlier law. The lex specialis criterion establishes that a law governing a specific subject matter (lex specialis) overrides a law which only governs general matters (lex generalis). Finally, the lex superior derogat legi inferiori principle states that a higher law overrides a lower law, because a legal system is commonly based on a power hierarchy. The proposed approach takes inspiration from this theory for addressing conflicting situations in the context of open and goal-directed intelligent systems.
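The three criteria can be applied in cascade when two norms conflict. The ordering below (lex superior first, then lex specialis, then lex posterior) is one common convention assumed for this sketch; the attribute names and example norms are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class LegalNorm:
    name: str
    rank: int         # hierarchy level: higher overrides lower (lex superior)
    specificity: int  # higher = more specific subject matter (lex specialis)
    enacted: int      # year of enactment: later overrides earlier (lex posterior)

def prevailing(n1: LegalNorm, n2: LegalNorm) -> LegalNorm:
    """Pick the norm that prevails between two conflicting ones."""
    for key in ("rank", "specificity", "enacted"):
        a, b = getattr(n1, key), getattr(n2, key)
        if a != b:
            return n1 if a > b else n2
    return n1  # tie: undecidable by these criteria alone; keep the first

old_general = LegalNorm("traffic_code", rank=1, specificity=1, enacted=1990)
new_special = LegalNorm("ztl_bylaw",    rank=1, specificity=2, enacted=2020)
assert prevailing(old_general, new_special).name == "ztl_bylaw"
```

Equal ranks defer the decision to specificity, and equal specificity to enactment date, mirroring the cascade described above.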
4. Overview of the Proposed Approach
As previously said, the proposed work is developed in the context of open and goal-directed intelligent systems that work in dynamic normative environments. In this kind of system, goals can be seen as motivators that provide the system with a reason for doing something. Moreover, the open systems we consider may evolve at run-time because (i) new services could be made available for satisfying existing goals and (ii) new goals may be requested of the system.
Hence, working in dynamic normative environments with systems that may evolve their behaviour requires new methods able to ensure norm compliance also for norms and system behaviours that are not defined at design-time. For addressing such issues, our approach is based on a triplet of elements, namely, state of the world, goal, and norm. A state of the world represents the particular conditions of the system and of the context in which it works at a specific time. A Goal expresses the desired state of the world the system wants to achieve when certain conditions are verified. Finally, Norms regulate the state of the world using obligations, permissions, and prohibitions. In particular, we considered two different cases. In the first case, obligation and prohibition norms modify the desired state of the world according to the admissible state expressed by obligations or prohibitions; a permission may or may not change the desired state of the world. In the second case, norms are considered as promoters or inhibitors of the system in pursuing goals. Practically, a permission relaxes the constraints expressed by the conditions under which a goal has to be satisfied. Conversely, a prohibition nullifies the commitment to a goal under the circumstances defined by the prohibition norm. Finally, an obligation introduces further conditions under which a goal has to be reached. In this vision, norms may thus increase the possibility for the system to pursue a goal (permissions), inhibit system intentions to pursue a goal (prohibitions), or force the system to pursue a goal (obligations). The main feature of the proposed approach is to obtain norm-compliant behaviour by exploiting normative reasoning applied to the states of affairs a system wants to achieve. Thus, norms regulate the system at the goal level, providing some advantages and overcoming some limitations of conventional approaches.
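The second reading above (norms as promoters or inhibitors of goal pursuit) can be sketched as transformations of a goal's trigger condition. This is our own illustration under simplifying assumptions: conditions are modeled as predicates over a state of the world, and the predicate names are invented:

```python
# Sketch: a norm's applicability condition c reshapes when a goal is pursued.
# tc: the goal's trigger condition; both are predicates over a state of the world.

def with_permission(tc, c):   # permission relaxes the trigger condition
    return lambda w: tc(w) or c(w)

def with_prohibition(tc, c):  # prohibition nullifies commitment when c holds
    return lambda w: tc(w) and not c(w)

def with_obligation(tc, c):   # obligation forces pursuit when c holds
    return lambda w: tc(w) or c(w)

tc = lambda w: "has_visa" in w            # invented trigger condition
c_forbid = lambda w: "visited_B" in w     # invented prohibition condition

pursue = with_prohibition(tc, c_forbid)
assert pursue({"has_visa"}) is True               # commitment stands
assert pursue({"has_visa", "visited_B"}) is False # commitment nullified
```

Note that permissions and obligations both widen the trigger condition here; they differ in whether pursuit is merely enabled or required, which this propositional sketch does not capture.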
For illustrating typical problems, we provide some examples (some norms used in the following examples are inspired by existing real norms). Let us suppose an intelligent system able to plan and organize the personal agenda of a user. Let us suppose that such a system can satisfy the user goal g1: I want to go to country_A. We also assume the user has already obtained the entry visa for country_A, which is a prerequisite for achieving g1. Let us also suppose that, at the moment of the goal commitment, the system knows only two ways for satisfying the goal g1: by booking a flight or a train. Figure 2 shows a goal model with the services the system may use for producing a postsituation which satisfies g1. We have also depicted two levels of abstraction to emphasize the difference between goals and services.
Let us suppose that a norm n1 states that “It is prohibited that a person visits the country_A if he has visited a country_B”. Let us assume that the user has already visited country_B. Thus, the presence of such a norm does not permit the user to go to country_A. For complying with such a normative requirement, the system must not plan the user trip. The proposed approach allows the system to revise its current commitment to that goal. Hence, rather than disabling all the possible ways the system can follow to satisfy the goal, the norm applied at the goal level does not allow the system to pursue the goal because it inhibits its intentions.
Let us assume, instead, that the personal agenda is committed to fulfilling two user goals, go to country_A and go to country_B. For complying with n1, the personal agenda has to be able to plan the trip to country_A first and then the trip to country_B. Our approach allows the system to reason about the effect of the norm on its desired state of the world. Because choosing to pursue the goal go to country_B first would lead the system to an unlawful state of the world, the system will plan the two goals opportunely.
Let us suppose now that a new service “rent a car” is available at run-time for satisfying the goal “go to (country)” (see Figure 3). The run-time introduction of this new element does not involve any change in the system configuration.
We do not need to modify anything to adapt the behaviour of the system to manage this new situation because the norm defined at the goal level spreads to the service level. The approach we propose allows maintaining norm compliance although the service level is changed.
Conversely, let us suppose that a norm n2 states that “In country A, it is prohibited to enter with foreign cars”. Let us suppose that the available car rental services do not have cars produced in country A. This kind of norm does not prohibit pursuing the goal g1, but it has effects on it. In our approach, such a norm introduces constraints on the final state of the world the system wants to reach. Thus, the system will choose the book a flight or book a train service in order to achieve g1.
Finally, let us suppose that a norm n3, “It is permitted that a person goes to country_A if he is a citizen of a member state of the organization X”, is introduced in the system at run-time. We also assume that the user does not yet have an entry visa for country A. The single effect of n3 on the system (i.e., without considering the presence of n1 and n2) is to relax the conditions under which the goal has to be satisfied. The system could plan to fulfill the goal go to country_A even without its prerequisite being satisfied (i.e., without an entry visa for country A). On the contrary, the simultaneous presence of n1 and n3 creates a joint effect on the system because they affect the same goal go to country_A. In particular, the injection of n3 could cause a system deadlock. Indeed, if the conditions of n1 and n3 are simultaneously valid for the user, an antinomy is generated, and the system does not know how to behave: compliance with one of the two norms causes noncompliance with the other. It is a classic example of a conflict generated when two norms are contradictory. Our normative reasoning approach implements recovery criteria for addressing such situations. In particular, in this situation n3 is a norm defined by a superior institutional power; thus, it should prevail over n1.
Such examples show only some of the situations that an open system should manage when working in a dynamic normative environment. Several other anomalous situations determining system deadlock can occur in implementing norm compliance. In this work, we address the following ones: (1) the introduction into the system of an inconsistent norm, that is, a norm that is self-contradictory because it contains a logical contradiction, namely, the conjunction of a statement S and its negation, not S; (2) the presence of an antinomy, which designates a conflict between two norms that are mutually exclusive or that oppose one another; (3) the run-time injection of norms incompatible with a system goal, meaning that pursuing that goal always violates the prescribed norms.
In our approach, we manage these situations in dynamically changing environments, where conflicting situations among norms may change according to the particular execution context. For addressing this concern, we introduce some new definitions about conflicts and inconsistencies that are based on a representation of the execution context. The norm compliance approach is defined through algorithms that are based on these definitions. In the following section, the theoretical foundations of the approach are introduced.
5. Theoretical Foundations
This section formally introduces the theoretical foundations of the proposed approach. Firstly, the definitions of state of the world, goals, and norms are introduced. Then, formal definitions about norm compliance and anomalous situations are presented. It is worth noting that such definitions imply a formalization of the sphere of knowledge within which the system operates. In particular, to establish a relationship between the formal specification of goals and states of the world and the formal representation of the normative framework, it is necessary to refer to the same semantic layer. This requirement can be satisfied by adopting a knowledge formalization based on a proper domain ontology (how to build the ontology is out of the scope of this paper; the reader may refer to the several approaches proposed in the literature).
5.1. State of the World, Goals, and Norms
The state of the world represents a set of declarative information about events occurring within the environment and relations among events at a specific time. An event can be defined as the occurrence of some fact that can be perceived by or be communicated to the intelligent system. Events can be used to represent any information that can characterize the situation of an interacting user as well as a set of circumstances in which the intelligent system operates at a specific time.
Definition 1 (state of the world). Let C be the set of concepts defining a domain. Let L be a first-order logic defined on C, with a tautology ⊤ and a logical contradiction ⊥, where an atomic formula is represented by a predicate applied to a tuple of terms and the predicate is a property of or a relation between such terms that can be true or false. A state of the world at a given time t (W_t) is the subset of atomic formulae whose values are true at time t: W_t = {φ ∈ L | φ is an atomic formula true at time t}.
Definition 1 is based on the closed-world hypothesis , which assumes that all facts that are not in the state of the world are considered false. Resuming the previous example, a possible state of the world at a given time t could be represented as W_t = {has_entry_visa(user, country_A), confirmed(user, flight_reservation)}. It means that, at time t, the user has an entry visa for country_A and has received the confirmation of the flight reservation.
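A state of the world under the closed-world hypothesis has a direct computational reading: a set of ground atoms, where membership means truth and absence means falsity. The following minimal sketch (our illustration; atom names are invented) makes this concrete:

```python
# State of the world as a set of ground atoms (closed-world assumption):
# an atomic formula holds at time t iff it belongs to W_t.

W_t = {
    ("has_entry_visa", "user", "country_A"),
    ("confirmed", "user", "flight_reservation"),
}

def holds(atom, state):
    """Truth of an atomic formula under the closed-world hypothesis."""
    return atom in state

assert holds(("has_entry_visa", "user", "country_A"), W_t)
assert not holds(("in", "user", "country_A"), W_t)  # absent, hence false
```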
Definition 2 (goal). Let C, L, and W_t be as previously introduced in Definition 1. Let TC and FS be formulae that may be composed of atomic formulae by means of the logic connectives AND (∧), OR (∨), and NOT (¬). A Goal G is a pair (TC, FS), where TC (trigger condition) is a condition, evaluated over a state of the world, establishing when the goal may be actively pursued, and FS (final state) is a condition, evaluated over a state of the world, establishing when the goal is eventually addressed: (i) a goal is active if W_t ⊨ TC; (ii) a goal is addressed if W_t ⊨ FS.
A Goal describes a desired state of affairs the actor wants to achieve. It is represented by a couple of elements named trigger condition and final state. The final state represents the desired state of affairs. A goal is activated when its trigger condition holds and is satisfied when a new state of the world entails its final state.
The previous goal (according to common goal-oriented methodologies, we give a name to a goal for referring to it ) could be represented as follows: g1 = (TC: has_entry_visa(user, country_A), FS: in(user, country_A)).
It means that g1 will be actively pursued after receiving the entry visa for country_A. The goal is considered addressed in a new state of the world where the user is in country_A.
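Restricting trigger conditions and final states to conjunctions of atoms, Definition 2 reduces to subset tests over a state of the world. A sketch of ours (the goal and atoms mirror the running example, simplified to conjunctive conditions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Goal:
    tc: frozenset  # trigger condition: conjunction of atoms that must all hold
    fs: frozenset  # final state: atoms describing the desired state of affairs

    def is_active(self, w):     # W_t |= TC
        return self.tc <= w
    def is_addressed(self, w):  # W_t |= FS
        return self.fs <= w

g1 = Goal(tc=frozenset({("has_entry_visa", "user", "country_A")}),
          fs=frozenset({("in", "user", "country_A")}))

w = {("has_entry_visa", "user", "country_A")}
assert g1.is_active(w) and not g1.is_addressed(w)
assert g1.is_addressed(w | {("in", "user", "country_A")})
```

General formulae with disjunction and negation would need a small evaluator instead of subset tests; conjunctions suffice for the examples in this paper.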
Definition 3 (norm). Let C, L, and W_t be as previously introduced in Definition 1. Let A and c be formulae composed of atomic formulae by means of the logic connectives AND (∧), OR (∨), and NOT (¬). Moreover, let Δ = {O, P, F} be the set of deontic operators. A Norm N is defined by the elements of the following tuple: N = (s, r, g, A, c, δ), where s (scope) identifies the field of reference of the norm; r is the Role the norm refers to; g is the Goal the norm refers to; A is a formula expressing the set of actions and/or states of affairs that the norm disciplines; c is a logic condition (evaluated over a state of the world W_t) under which the norm is applicable; δ ∈ Δ is the deontic operator applied to A that the norm prescribes to the couple (r, g): δ(A).
An obligation O(A) imposes obtaining the state of affairs A. A prohibition F(A) defines A as a nonacceptable state of affairs. Finally, permissions do not have a restrictive role. We also introduce the concept of scope for relating a norm to a specific context. The scope also allows establishing some hierarchy among norms.
The previous norm n1, stating that “It is prohibited that a person visits the country_A if he has visited a country_B”, could be represented as follows: n1 = (s: immigration_law, r: person, g: g1, A: in(user, country_A), c: visited(user, country_B), δ: F).
Hereafter, we assume that, given a norm N = (s, r, g, A, c, δ), the actor that is pursuing the goal g plays the role r.
Definition 4 (applicable norm). Let W_t be a state of the world at a given time t. A norm N is applicable at time t if W_t ⊨ c.
Definition 4 establishes when, in a given context, a norm is workable. When the applicability condition is a tautology (c ≡ ⊤), the norm is applicable in each state of the world the system may be in.
Definition 5 (active norm). Let W_t be a state of the world at a given time t and let g be a not yet addressed goal. A norm N is active at time t if W_t ⊨ c and W_t ⊨ TC.
Definition 5 establishes when a given norm may produce an effect on the system. In particular, this occurs when the norm is applicable and the goal is active (namely, the system intends to pursue that goal). For the previous example, n1 is applicable for a user that has already visited country_B, but it may produce an effect only if the user has an entry visa for country_A, which triggers the system to try to pursue the goal.
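Definitions 4 and 5 can be sketched directly on the conjunctive representation used earlier. This is our illustration (atoms simplified from the running example), assuming conditions are conjunctions of atoms:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Goal:
    tc: frozenset  # trigger condition
    fs: frozenset  # final state

@dataclass(frozen=True)
class Norm:
    scope: str
    role: str
    goal: Goal
    a: frozenset   # regulated state of affairs A
    c: frozenset   # applicability condition (conjunction of atoms)
    deontic: str   # "O" | "P" | "F"

def applicable(n, w):
    """Definition 4: W_t |= c."""
    return n.c <= w

def active(n, w):
    """Definition 5: applicable, goal active, and goal not yet addressed."""
    return applicable(n, w) and n.goal.tc <= w and not (n.goal.fs <= w)

g1 = Goal(tc=frozenset({("has_entry_visa",)}), fs=frozenset({("in_country_A",)}))
n1 = Norm("immigration_law", "person", g1,
          a=frozenset({("in_country_A",)}),
          c=frozenset({("visited_country_B",)}), deontic="F")

assert applicable(n1, {("visited_country_B",)})
assert not active(n1, {("visited_country_B",)})   # goal not triggered yet
assert active(n1, {("visited_country_B",), ("has_entry_visa",)})
```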
5.2. Norm Compliance
The definition of norm compliance is based on the concept of inadmissible state of the world, defined as follows.
Definition 6 (inadmissible state of the world). Let N = (s, r, g, A, c, δ) be an applicable norm that came into force at time t_N. A state of the world W_t at a given time t is an Inadmissible State of the World if both the following conditions are verified: (i) if δ = F, there exists no t′ < t_N such that W_{t′} ⊨ A; (ii) (δ = F ∧ W_t ⊨ A) ∨ (δ = O ∧ W_t ⊨ ¬A).
The first condition ensures the nonretroactive effect of a prohibition norm. It disciplines the case where the state of affairs regulated by a prohibition norm has occurred before the applicability of the norm. Therefore, if N is a prohibition and A occurred before the norm came into force, the state of the world cannot be considered inadmissible. The second condition verifies whether W_t contrasts with the deontic constraint the norm prescribes.
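Both conditions of Definition 6 can be sketched as a single check, given the history of states that held before the norm came into force. This is our illustration under the conjunctive representation used earlier:

```python
# Definition 6, sketched: a prohibition makes W_t inadmissible only if the
# forbidden state of affairs did NOT already hold before the norm's enactment
# (nonretroactivity); an obligation makes W_t inadmissible if A does not hold.

def inadmissible(w, deontic, a, history_before_enactment):
    if deontic == "F":
        occurred_before = any(a <= past for past in history_before_enactment)
        return (not occurred_before) and a <= w
    if deontic == "O":
        return not (a <= w)
    return False  # permissions never make a state inadmissible

a = frozenset({("in_country_A",)})
now = {("in_country_A",)}

assert inadmissible(now, "F", a, history_before_enactment=[])
# Nonretroactive: the user was already in country_A before the norm's enactment.
assert not inadmissible(now, "F", a,
                        history_before_enactment=[{("in_country_A",)}])
```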
It is worth noting that, because A may refer to a state of affairs, it might coincide with the desired state of the world (i.e., A ≡ FS). This means that the norm directly disciplines the goal fulfillment. Conversely, when A ≢ FS, the norm constrains the way to reach the final state of the world by pursuing the goal. The following definition differentiates these two cases.
Definition 7 (norm compliance). Let us consider a norm N = (s, r, g, A, c, δ) and a goal g = (TC, FS). Let W_t be a state of the world at a given time t in which N is active and let W_{t_f} be the state of the world in which FS is true. Pursuing the goal g is compliant with the norm N if: (i) A ≡ FS and W_{t_f} is an admissible state of the world, or (ii) A ≢ FS and every W_{t′}, with t ≤ t′ ≤ t_f, is an admissible state of the world.
The first case ensures that the final state of the world achieved by pursuing a goal does not contain any violation of the normative constraints. The second case establishes that the system moved along a path which satisfies the norm, passing appropriately through the various states of the world. Definition 7 is strictly correlated to the practical reasoning of goal-oriented systems. Practical reasoning is reasoning directed towards actions; it is the process of figuring out what to do . It consists of two activities: deliberation, deciding what goals to achieve, and means-ends reasoning, determining how to meet these goals. The central question of goal deliberation is “How can the system deliberate on its goals to decide which ones shall be pursued?” . A goal-oriented system sees some of its goals merely as possible options. Goal deliberation has the task of deciding which goals a system actively pursues, which ones it delays, and which ones it abandons. Conversely, means-ends reasoning is aimed at providing the operationalization of goals. It is the process of deciding how to achieve a goal using the available means (e.g., actions, services, and resources). In our approach, a means describes a particular trajectory, in terms of states of the world, that the system may intentionally follow to address a given result. Thus the system knows its effect on the state of the world. The definition we introduce about norm compliance directly influences the process of goal deliberation. The first condition of Definition 7 has a direct impact on the choice of goals that can be pursued. A system can deliberate to pursue a goal based on run-time conditions by envisaging the normative effects of the goal. Conversely, means-ends reasoning is a process that allows choosing the appropriate ways to fulfill a deliberated goal. The second condition of norm compliance is implicitly related to this process.
A system can determine the way to reach a goal by envisaging the normative effects of available means that it can choose.
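The two cases of Definition 7 can be sketched as a check over a candidate trajectory of states (a means, in the sense above). This is our illustration, restricted to conjunctive formulae and ignoring the nonretroactivity clause of Definition 6 for brevity:

```python
# Definition 7, sketched: if the norm regulates the final state itself (A == FS),
# only the final state must be admissible; otherwise every state along the
# trajectory from commitment to fulfilment must be admissible.

def admissible(w, deontic, a):
    if deontic == "F":
        return not (a <= w)   # forbidden A must not hold
    if deontic == "O":
        return a <= w         # obligatory A must hold
    return True               # permissions are not restrictive

def compliant(trajectory, fs, deontic, a):
    if a == fs:               # norm disciplines goal fulfilment directly
        return admissible(trajectory[-1], deontic, a)
    return all(admissible(w, deontic, a) for w in trajectory)

fs = frozenset({("in_country_A",)})
forbidden = frozenset({("in_motion",)})   # A != FS: constrains the path
path = [{("at_home",)}, {("in_motion",)}, {("in_country_A",)}]

# The path passes through a forbidden intermediate state, so it is noncompliant.
assert not compliant(path, fs, "F", forbidden)
```

Means-ends reasoning can then filter the available means by this predicate before committing to one of them.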
5.3. Anomalous Situations
This section formalizes the types of exceptional situations that can occur when working with normative propositions.
Definition 8 (inconsistent norm). A norm N is inconsistent if c ≡ ⊥.
A norm is inconsistent when it contains a logical contradiction. This means that its applicability condition contains the conjunction of a statement S and its negation, not S. For example, let us suppose a norm N: It is prohibited that a person enters a building site if he is unauthorized, and if he is without protection and he is authorized. This norm contains the conjunction of two contradictory statements (i.e., he is an unauthorized user and he is an authorized user). In this case, the norm will always be not applicable, with no effect on the system behaviour. Such a situation could occur during the definition of norms with a complex condition, or a writing error could determine it. In the previous example, the corrected norm could be as follows: It is prohibited that a person enters a building site if he is unauthorized or if he is without protection and he is authorized.
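For conditions in conjunctive form, the inconsistency of Definition 8 reduces to finding a literal together with its negation. A sketch of ours, with literals modeled as (atom, polarity) pairs:

```python
# Definition 8, sketched: a condition given as a conjunction of literals is
# inconsistent (c == false) if it contains both S and not-S.

def inconsistent(condition):
    """condition: set of (atom, polarity) literals read as a conjunction."""
    return any((atom, not pol) in condition for atom, pol in condition)

# "unauthorized AND without_protection AND authorized" -> contradiction
bad = {("authorized", False), ("without_protection", True), ("authorized", True)}
ok  = {("authorized", False), ("without_protection", True)}

assert inconsistent(bad)
assert not inconsistent(ok)
```

A full check for arbitrary formulae would require a satisfiability test; the syntactic check above covers the conjunctive writing errors described in the text.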
Definition 9 (incompatibility). Let g = (TC, FS) be a not yet addressed goal. A norm N = (s, r, g, A, c, δ) is incompatible with g if δ = F, c ≡ ⊤, and every way of addressing g passes through a state of the world W_{t′} such that W_{t′} ⊨ A.
An incompatibility exists between a norm and a goal when pursuing the goal always violates the prescribed norm.
Let us consider a norm N = (s, r, g, A, c, δ) and a goal g = (TC, FS); the following incompatibility cases may arise.
Case A. δ = F, c ≡ ⊤, and A ≡ FS; then N modifies the final state of g as follows: FS′ = FS ∧ ¬A ≡ ⊥.
In this case, N prohibits the state of affairs FS directly; as a consequence, it forbids pursuing the goal in any way. In some cases, this type of norm could be used for inhibiting some system behaviours, but the run-time injection of a norm whose applicability condition is erroneously written in such a way as to determine a tautology may cause an undesired system deadlock. For example, let us consider the following norm: It is prohibited to go to country A. It directly constrains the achievement of the goal go to country A.
Case B. The norm is a prohibition and the state of affairs it forbids is necessary for the accomplishment of the goal.
In this case, the norm indirectly constrains the achievement of the goal because it forbids being in a state that is necessary for the accomplishment of the goal. To remain in the context of the previous case, let us consider a simple example: the norm it is prohibited to be in motion highlights the impossibility of pursuing the goal go to country A in compliance with the norm.
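The two incompatibility cases can be sketched as a single check. The fragment below is a simplification under our own assumptions (states named by strings, and the set of states necessary to reach the goal given explicitly); it is not the paper's formalism.

```python
# Hedged sketch of Definition 9: a prohibition is incompatible with a goal
# when the forbidden state of affairs coincides with the goal's final state
# (Case A) or with a state that is necessary to reach it (Case B).
# State and goal names are illustrative.

def incompatibility(prohibited_state, goal_final_state, necessary_states):
    if prohibited_state == goal_final_state:
        return "Case A"   # the norm directly forbids the goal's final state
    if prohibited_state in necessary_states:
        return "Case B"   # the norm forbids an unavoidable intermediate state
    return None           # no incompatibility

# Goal: "go to country A"; being "in motion" is necessary to get there.
print(incompatibility("in country A", "in country A", {"in motion"}))  # Case A
print(incompatibility("in motion", "in country A", {"in motion"}))     # Case B
print(incompatibility("at home", "in country A", {"in motion"}))       # None
```

Running such a check when a norm is injected lets the system flag incompatibilities before the goal is ever activated.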
Definition 10 (deontic contradiction). Let us consider the state of the world at time t and two norms that are applicable in it. The two norms are deontically contradictory if there is no way to comply with both of them.
As previously said, an antinomy designates a conflict between two norms that are mutually exclusive or that oppose one another. Some norms can generate an antinomy only under certain circumstances. In autonomous systems, such cases are not predetermined; thus, systems have to be able to evaluate each particular situation at run-time. In the following, we discuss some possible scenarios that could occur during system execution, related to the possible kinds of antinomy presented in Section 3.2.
Let us consider two norms associated with the same goal; depending on their deontic operators and applicability conditions, the following cases may arise.
Case C. When both norms are applicable at the same time, the system is in a conflict situation: there is no way to be compliant with both norms, so the system adopts the recovery criteria. When only one of the two norms is applicable, the final state of the goal is constrained to be compliant with the applicable norm.
Case D. When both norms are applicable, the system avoids a conflict situation by pursuing the goal in such a way that the final state of the world includes the prescribed state of affairs. In the other cases, the two norms do not create a conflict.
Case E. When both norms are applicable, to be compliant with the norms the system fulfills the goal in such a way as to avoid the forbidden state of affairs. In the other cases, the two norms do not create a conflict.
In particular, when the two norms apply under the same condition and refer to the same state of affairs, the previous cases can be specialized as follows.
Case F. The joint effect of the two norms leads the final state of the goal into a contradictory state of the world. In this case, there is no way to be compliant with both norms, and the system adopts the recovery criteria.
Case G. The joint effect of the two norms does not have a real effect on the system; in any case, the system pursues the goal.
Case H. In this case, as well, there is no way to be compliant with both norms; indeed, to avoid violating any norm, the system should not pursue the goal.
It is worth noting that a norm is logically contradictory when the contradiction concerns the logical conditions under which the norm is applicable. On the contrary, we talk about a deontic contradiction when the contradiction concerns the semantic meaning of the deontic operators of the norms involved.
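A run-time antinomy check of the kind discussed above can be sketched in a few lines. The representation below (an O/F/P operator tag, a state-of-affairs string, and an applicability flag) is an illustrative assumption of ours, not the paper's model.

```python
# Hedged sketch of a run-time deontic-contradiction check: two applicable
# norms that respectively oblige and forbid the same state of affairs cannot
# both be satisfied, so the system falls back to its recovery criteria.
from collections import namedtuple

# O = obligation, F = prohibition, P = permission (illustrative tags).
Norm = namedtuple("Norm", "operator state applicable")

def deontic_conflict(n1, n2):
    """True when the two norms are deontically contradictory right now."""
    if not (n1.applicable and n2.applicable):
        return False                 # at most one norm constrains the goal
    same_state = n1.state == n2.state
    operators = {n1.operator, n2.operator}
    return same_state and operators == {"O", "F"}

n1 = Norm("O", "wear protection", True)
n2 = Norm("F", "wear protection", True)
print(deontic_conflict(n1, n2))                        # True: adopt recovery
print(deontic_conflict(n1, Norm("F", "smoke", True)))  # False: different states
```

Note how the check is situational, matching the observation that antinomies are not predetermined: the same pair of norms conflicts only while both are applicable.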
6. Algorithms for Norm Compliance
In this section, the algorithms that implement norm compliance for open and goal-directed systems are presented. For space reasons, the listings are placed at the end of the paper.
Algorithm 1 implements the normative compliance. It ensures that the system behaves in conformity with the normative environment it is operating in. It works on a triple of elements: a state of the world, a set of goals the system has to satisfy, and a set of norms the system has to obey to comply with the normative environment. All three elements may change during system execution. The state of the world may change due to events that occur or actions that are performed in the environment. The set of norms may change due to the introduction of new normative requirements that come into force or the deletion of existing ones; the updating of an existing norm is regarded as the introduction of a new norm. For the scope of the paper, the algorithm highlights the introduction of new norms, which is the most interesting case because it can generate anomalous situations. Finally, the set of goals may change to satisfy new user requirements.
(Algorithm 1: normative compliance. Data: the state of the world, the set of goals, and the set of norms; the main loop runs while the system is running, processing each goal against the applicable norms. The full listing is at the end of the paper.)
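The loop described above can be illustrated as follows. This is a drastic simplification under our own assumptions (dictionary-based norms and goals, and a prohibition blocking a goal outright); it is not the paper's actual Algorithm 1.

```python
# Hedged sketch of one iteration of the normative-compliance loop: filter
# the norms that are applicable in the current state of the world, then
# process each goal against the norms associated with it. All names are
# illustrative.

def compliance_step(world, goals, norms):
    """One iteration of the loop: return the goals the system may pursue."""
    applicable = [n for n in norms if n["condition"](world)]
    pursuable = []
    for goal in goals:
        related = [n for n in applicable if n["target"] == goal["name"]]
        # A goal blocked by an applicable prohibition cannot be pursued.
        if any(n["operator"] == "F" for n in related):
            continue
        pursuable.append(goal["name"])
    return pursuable

world = {"authorized": False}
norms = [{"operator": "F", "target": "enter site",
          "condition": lambda w: not w["authorized"]}]
goals = [{"name": "enter site"}, {"name": "file report"}]
print(compliance_step(world, goals, norms))  # ['enter site' is blocked]
```

Re-running the step after the state of the world changes (e.g., once the person is authorized) makes the blocked goal pursuable again, reflecting the run-time nature of the approach.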
Step ① makes a preliminary check on the new norms (if any) that are dynamically introduced into the system, to exclude the anomalies of Definitions 8 and 9 (see Algorithm 2). This step ensures that norms are consistent and that there is no incompatibility with the goals they refer to. Step ② is the core of the normative reasoning. The system is in a given state of the world, and the norms may affect the system goals only if the goals have not been addressed yet. Thus, the set of applicable norms is filtered. For each goal the system has to satisfy, the set of associated norms is first processed to separate the norms that act on the final state of the world from those that act on the goal's trigger condition, because they produce different effects on the system behaviour. As previously said, when norms act as constraints on the final state of the world, compliance has to be ensured for obligations and prohibitions, while permissions are not taken into consideration: permissions do not play a direct role in norm compliance because they cannot be violated. Conversely, when norms act as constraints on goal fulfillment, they are promoters or inhibitors of the system in pursuing its goals. In this case, permissions, obligations, and prohibitions all have to be considered. As previously said, prohibitions do not allow the system to pursue a goal; obligations further constrain the conditions under which a goal can be pursued; and permissions, conversely, relax such conditions. Then, three different situations can occur.
(Algorithm 2: the preliminary check on new norms. Data: a list of norms; result: a list of consistent norms, discarding inconsistent norms and incompatibilities; see Case A and Case B.)
The simplest one (step Ⓐ) is that there are no norms for the goal; in such a case, there are no restrictions and the system can pursue the goal. The second situation (step Ⓑ) is the basic case in which the set of norms contains only one norm. In such a case, there are two possible occurrences: (i) if the norm acts on the goal's trigger condition, then, if the norm is a permission or an obligation and it is applicable, the system can fulfill that goal even if the original trigger condition does not hold; if the norm is a prohibition, it further constrains the goal activation and the system cannot pursue that goal as long as the norm is applicable; (ii) if the norm acts on the final state, then a new constrained final state is determined (see Algorithm 3) and the system tries to achieve such a constrained final state.
(Algorithm 3: constraining a final state. Data: an applicable norm and a final state; result: a constrained final state.)
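In the spirit of Algorithm 3, the effect of a single applicable norm on a goal's final state can be sketched as follows. The set-of-literals representation is an assumption of ours, not the paper's formalism.

```python
# Hedged sketch of constraining a final state with one applicable norm:
# an obligation requires its state of affairs to hold in the final state,
# a prohibition requires it not to hold, and a permission adds nothing.
# Operator tags (O/F/P) and state names are illustrative.

def constrain_final_state(final_state, operator, state_of_affairs):
    """Return a new final state compliant with the given norm."""
    constrained = set(final_state)
    if operator == "O":        # obligation: the state must hold
        constrained.add((state_of_affairs, True))
    elif operator == "F":      # prohibition: the state must not hold
        constrained.add((state_of_affairs, False))
    return constrained         # permissions leave the state unchanged

goal_state = {("in country A", True)}
print(constrain_final_state(goal_state, "O", "passport stamped"))
print(constrain_final_state(goal_state, "F", "carrying goods"))
```

Because the constraint is expressed at the goal level, any plan of actions the system later selects to reach the constrained final state inherits the norm's effect, which is the flexibility argument made in the abstract.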
The last situation (step Ⓒ) is the general case in which there is more than one norm, possibly with different deontic operators. In this case, Algorithm 1 modifies the goals, making them norm compliant according to the whole set of norms. Because there is more than one norm, the presence of antinomies is possible; thus, for each set of norms, antinomies are checked (see Algorithm 4), and the recovery is applied in the situations of Case C, Case F, and Case H, in which there is no way to be compliant with both norms. Hence, Algorithm 1 first works on the norms that modify the goal's trigger condition: by encapsulating the condition expressed by such norms into the goal they refer to, it is possible to modify the activation of that goal, thus making it compliant with the norms.
(Algorithm 4: checking antinomies. Data: a list of applicable norms and the type of antinomy; result: a list of consistent norms.)