Abstract

Proximity-awareness, that is, a mobile device being aware of the presence and capabilities of other devices in its proximity, can be beneficial in many frameworks that support spontaneous sharing of sensors available in advanced personal mobile devices (e.g., smartphones and tablets). Providing Quality of Service (QoS) guarantees in such frameworks is highly challenging because of the dynamic and loosely coupled nature of Mobile Ad Hoc Networks (MANETs). A framework called the SPontaneous Information and Resource sharing InfrasTructure (SPIRIT) has been previously presented to address the challenges of proximity-aware mobile sensor sharing. SPIRIT allows mobile applications to share sensors as services in an automatic fashion by enabling the service providers and clients to express a variety of QoS specifications. This paper presents a set of mobility-aware QoS mechanisms for implementing QoS specifications along the multihop paths between service providers and clients in highly mobile environments. Simulations in small-scale Mobile Ad Hoc Networks show SPIRIT’s ability to effectively control and manage traffic flows while maintaining the desired QoS. The implications of the proposed QoS mechanisms extend beyond the scope of SPIRIT, as QoS provisioning is an important issue in many MANET frameworks and applications, including Vehicular Ad Hoc Networks (VANETs).

1. Motivation

The huge growth in the number of advanced personal mobile devices such as smartphones and tablets equipped with sensors (e.g., location, speed, acceleration, image, light, pressure, and sound) presents the potential for building powerful distributed sensing and resource sharing systems. However, most mobile devices operate in isolation from one another; that is, they are not aware of the presence and capabilities of other devices in their proximity. There are numerous situations where proximity-awareness could benefit a mobile device, for example, by allowing it to “borrow” sensor data or other resources (e.g., processing, network, and storage) from nearby peers. This is particularly useful in situations where mobile devices form a peer-to-peer Mobile Ad Hoc Network (MANET) to pursue a common goal (e.g., providing emergency response, collecting and analyzing large datasets during a scientific field trip) by sharing their resources in a spontaneous fashion. Emergency response in particular has received increasing attention; for example, there is a growing recognition by governments and private institutions that MANET-based emergency response (ER) systems could prove highly beneficial in minimizing the loss of human lives after a natural disaster occurs [1–3]. One of the major in-field ER tasks is vital signs monitoring of multiple patients. Figure 1 shows an example of a proximity-aware mobile healthcare app, called BREathing rate MONitoring (BREMON) [4], which enables smartphone-based monitoring of the breathing activities of multiple patients at once at a disaster site. In BREMON, paramedics place a smartphone on the upper abdominal region (below the rib cage and on top of the thoracic diaphragm) of a patient who is lying on the ground. The acceleration data collected on the smartphone during the patient’s inhaling and exhaling activities is processed to calculate the number of Breaths Per Minute (BPM). The latest BPM data is periodically relayed to the smartphones used by all the nearby paramedics over a multihop MANET. If there is any sudden change in the BPM of a patient, the system alerts the paramedics on their smartphones. Thus, paramedics can monitor and care for several patients at once, while still being alerted to sudden changes in any particular patient’s physiological status.

Due to the dynamic nature of MANETs, an infrastructure is required that facilitates the processes of finding other devices, sharing sensor data among such devices, and maintaining network connectivity in mobile environments. The sensor resources shared by such proximity-aware applications correspond to services offered by the infrastructure. The service providers and clients interact by exchanging messages over the multihop wireless medium. A spontaneous service sharing environment, called the SPontaneous Information and Resource sharing InfrasTructure (SPIRIT), has been previously introduced in [5, 6]. SPIRIT enables mobile devices to share sensor resources as services in an automatic and efficient fashion. It allows service providers to specify constraints (e.g., geographic area, duration, CPU load, and residual power) for sharing services; similarly, the clients are allowed to specify Quality of Service (QoS) requirements (e.g., reliable and persistent data delivery) for the services they are seeking. SPIRIT adapts the (m, k)-firm guarantee model [7] for mobile scenarios and utilizes it to enable the clients to specify their QoS requirements (i.e., receiving any m out of k consecutive messages is sufficient for meeting the desired QoS level).

However, given the highly dynamic network topology of MANETs, supporting QoS in any form is extremely difficult in such environments. Some of the top challenges are as follows. (1) Transient path lifetime: as node mobility causes new routes to form and old routes to break frequently, it is highly unlikely that one particular multihop, end-to-end data delivery path will exist for the entirety of the desired service duration. Hence the notion of providing an end-to-end QoS guarantee for the entire service lifetime is not applicable. (2) Resource constraints: in SPIRIT-like frameworks, MANET nodes share resources with peers over multiple hops while working together towards a common goal. During multihop forwarding of service data messages, the resource-constrained MANET nodes (e.g., constrained by limited local resources such as battery, storage, and networking bandwidth) incur additional resource overhead.

This paper presents a set of four mobility-aware mechanisms for implementing the QoS specifications within the SPIRIT infrastructure. Specifically, these proposed QoS mechanisms account for the highly dynamic network topology, inherent unpredictability, and resource constraints of MANETs in the following ways.
(i) QoS Signaling. The service lifetime is divided into multiple discrete, subsequent time intervals called subscription intervals (SIs): at the beginning of each SI a new, stable, multihop data delivery path that will last for the SI duration is discovered (note that the new SI path can include subpath(s) from the previous SI path). The client QoS specifications are signaled during the path discovery so that the same QoS requirements are maintained along the new SI path. Such discretization of QoS signaling allows QoS requirements to be continuously met over the desired service duration in highly mobile environments.
(ii) Resource Policing. Resource policing in SPIRIT allows MANET nodes to impose and exercise local upper limits on the usage of their resources (i.e., bandwidth and storage) for services requested by other nodes.
(iii) QoS Classification. SPIRIT services are classified into three types in terms of service execution time, that is, INSTANTANEOUS, RESERVE-AHEAD, and CONDITIONAL. Such classification enables clients to express a variety of QoS requirements and allows other nodes (e.g., service providers and intermediate nodes) to reduce their processing complexity. It also enables SPIRIT to achieve some semblance of priority-based flow control in the network.
(iv) Admission Control. When a new multihop data delivery path is being explored during an SI, the admission control process at the potential future intermediate nodes determines whether current resource conditions are sufficient for offering the QoS requested by the service-seeking client, without any QoS violations. During admission control, parameters like available bandwidth, node availability based on predicted mobility, and storage availability are considered for making admission decisions. The process also uses a look-ahead strategy in order to prevent an INSTANTANEOUS/CONDITIONAL service from being preempted when a previously admitted RESERVE-AHEAD service stream becomes active.

These four mobility-aware QoS mechanisms make it possible for QoS specifications to be effectively implemented within the framework while minimizing resource consumption. A simulation that utilizes a modified (m, k)-firm-based Weighted Fair Queuing (WFQ) scheme [8] demonstrates a real-life implementation opportunity for SPIRIT; the results show SPIRIT’s ability to effectively control and manage traffic flows while maintaining the desired QoS. The implications of the proposed QoS mechanisms extend beyond the scope of SPIRIT, as QoS provisioning is an important issue in many MANET frameworks and applications, including VANETs.

The rest of this paper is organized as follows: Section 2 compares SPIRIT with related work, and Section 3 provides a brief overview of the SPIRIT software architecture, followed by a detailed discussion of the SPIRIT service sharing model in Section 4. Next, Section 5 describes the QoS mechanisms in detail, while Section 6 discusses simulation results and a proof-of-concept implementation of SPIRIT, and finally, Section 7 concludes the paper.

2. Related Work

Implementing QoS in mobile sensing systems is highly challenging because of the dynamic and loosely coupled nature of such systems. There are two aspects of QoS in MANETs, that is, (1) the QoS specifications enabled by the system and (2) the QoS provisioning mechanisms.

2.1. QoS Specifications

The most popular and desired QoS specifications in such systems with high node mobility are (1) reliable delivery of events, (2) timeliness, and (3) security. Reliable delivery of an event is crucial because of the nondeterministic behavior of the system. Related efforts [9, 10] propose reducing this nondeterministic behavior by implementing policies such as persistence of events, durability of subscriptions, and event storage and retransmission. The SPIRIT infrastructure provides similar approaches to ensure reliable delivery of sensor data to mobile clients. The proximity-aware applications enabled by SPIRIT are similar to many real-time applications (e.g., multimedia and automotive control applications) in that occasionally missing a few task deadlines is acceptable. One of the popular solutions to the scheduling problem in real-time systems is the (m, k)-firm guarantee model [7]. In this paper, the model is incorporated into specifying the QoS requirements of the proximity-aware applications enabled by SPIRIT. SPIRIT further extends the (m, k) model into an (m, k, l) model where the additional parameter l signifies the level of data persistence required by the client: this extended model helps to increase reliability within the infrastructure, as clients can request the service messages that were lost in the event of a data delivery failure.

Timeliness is another crucial issue for many real-time applications. The loosely coupled relationship between the service providers and clients makes it hard to provide such a guarantee. Recent research has proposed a number of ways to provide timeliness, such as approaches based on supervised machine learning [11] or packet scheduling [12], or autonomous adaptation and coordination of networking layers [13]. Timeliness is not currently supported within the SPIRIT infrastructure; however, it is possible to provide timeliness guarantees in SPIRIT by associating priorities with service messages so that the most crucial messages get higher processing and transmission priority in the system.

Implementing security in mobile sensing systems is critical because it involves not only granting system access to authorized participants, but also enforcing trust between service providers and clients. A client may want to trust the authenticity of the data/events received from the system. Since an event is in general delivered to multiple clients in these systems, the fact that the event traverses several infrastructure nodes during routing makes it difficult to ensure a trust relationship between the service providers and the clients. Dionysiou et al. [14] propose enforcing trust between publishers and subscribers through a chain of trust relationships involving all the infrastructure nodes along the data dissemination path, while preserving the anonymity between the publishers and subscribers. However, in the most general case where one cannot assume the whole infrastructure to be trustworthy, the possibility of an event traveling through potentially malicious networks or nodes should be considered. Fiege et al. [15] propose a solution to this scenario by organizing groups of trust in scopes, that is, logical domains within the infrastructure. The organization in scopes limits the visibility of publishers, subscribers, events, and subscriptions to a single scope. The SPIRIT infrastructure is limited in that it assumes that all the participants are familiar nodes (e.g., a team of rescue workers or colleagues) so that the intermediate nodes can be trusted not to corrupt the events, subscriptions, or the participants’ identities.

2.2. QoS Provisioning

Provisioning QoS in MANETs is much more difficult than in most other types of networks, because the wireless bandwidth is shared by adjacent nodes and the network topology changes very frequently as the nodes move. Most routing protocols for MANETs (e.g., AODV, DSR, and TORA [16]) are designed without explicitly considering the QoS capabilities of the routes they generate. QoS routing requires finding not only a route from a source to destination(s), but also a route that satisfies the end-to-end QoS requirements requested by the client. For this reason, QoS provisioning mechanisms are tightly integrated with the standard routing protocols in MANETs [17, 18]. QoS signaling is a widely deployed approach in the Integrated Services (IntServ) architecture [19]: in this approach, end-to-end messages are signaled using the Resource reSerVation Protocol (RSVP) [20] to enable QoS configurations and reserve necessary resources on the routers along the multihop data path. Similar to the RSVP protocol, SPIRIT allows the service-seeking clients to signal the service provider and intermediate nodes about the specific QoS required in the requested service stream. On the other hand, the Differentiated Services (DiffServ) architecture [21] reduces processing complexity and signaling overhead by aggregating service flows in different QoS classes. Similar to DiffServ, the QoS reservations in SPIRIT are coarsely classified in three categories (mentioned in Section 1) and are statically configured at the service provider and intermediate nodes. As the QoS requirements in SPIRIT are signaled once for each SI, the static QoS configurations only last for a short time period, thus reducing the negative impact of node mobility in comparison to DiffServ in which the static QoS configurations last for the entire lifetime of a service. Applications similar to the class of proximity-aware applications enabled by SPIRIT also employ QoS classification for addressing the challenge of providing QoS in a dynamic, heterogeneous environment. Such applications range from mobile healthcare applications [22, 23] to cooperative unmanned aerial vehicles (UAVs) applications [24, 25] and exhibit heterogeneous traffic characteristics ranging from low-rate, predictable traffic to bandwidth-hungry real-time applications. Typically, applications are classified into different service classes, based on their particular QoS requirements (e.g., latency constraints) or data criticality (e.g., emergency alarms versus normal data traffic). Then advanced scheduling and routing schemes are employed to provide different levels of priority to each service class.

In addition to QoS signaling and classification, admission control at the intermediate routers is a crucial aspect in QoS provisioning. Two of the highly critical parameters used by policy and admission control procedures are bandwidth and delay [26, 27]. However, additional parameters like mobility, data loss, and energy efficiency should also be considered [28]. SPIRIT uses parameters like available bandwidth, node availability based on predicted mobility, and storage availability (for data persistence) for making admission control decisions.

3. SPIRIT Architecture Overview

The architecture of SPIRIT follows a layered approach as shown in Figure 2. SPIRIT is a middleware that sits between the application layer and the mobile operating system. As shown in Figure 2, SPIRIT consists of four layers, which are discussed below.

SPIRIT API Layer. This layer provides a well-defined interface to the applications that want to share services spontaneously using the SPIRIT infrastructure. The key task of the SPIRIT API layer is to provide abstractions for the functionalities provided by the infrastructure, that is, to enable mobile devices to create, discover, subscribe, invoke, unsubscribe, and control services within the system.

SPIRIT Services Layer. This layer (illustrated by dashed lines in Figure 2) consists of three modules for implementing the service sharing functionalities provided by SPIRIT, namely, Service Description and MaTching (SDMT), Service Discovery and MaNagement (SDMN), and Quality of Service Management (QoSM). The SDMT module maps the user specifications (passed down to the infrastructure by the applications via the SPIRIT API) to abstract service specifications, using an ontology-based subscription language. This module also provides an ontology-enhanced search-and-match engine for finding the service providers that adhere to conditions requested by service-seeking clients. The SDMN module realizes the abstract ontology-based specifications into concrete messages of specific transmittable format. This module also provides an interaction protocol for implementing the communications between service providers and service-seeking clients. The QoSM module provides approaches for implementing end-to-end soft QoS within the infrastructure while handling node mobility in an efficient way. The mechanism implemented by the QoSM module is the focus of the current paper.

Courier Group Communications Layer. Courier uses the underlying unicast routing module to create an overlay routing network of service providers and subscribers. Courier addresses the challenges of group management posed by the highly dynamic and unpredictable character of mobile networks. It provides approaches for bandwidth efficient group management and adaptive data dissemination while focusing on handling mobility in an efficient way.

Peer-to-Peer Asymmetric Geographic Forwarding Layer. This bottommost layer is a unicast routing protocol called Asymmetric Geographic Forwarding (A-GF) [29], which is based on the well-known Geographic Forwarding (GF) principle. A-GF discovers asymmetric links in the network, evaluates them for stability (e.g., based on mobility), and uses them to obtain more efficient and shorter routes. A-GF also successfully identifies transient asymmetric links and ignores them to further improve the routing efficiency.

4. SPIRIT Service Sharing Model

In SPIRIT, the shared sensor resources correspond to services offered by the mobile devices. Each service is identified by a 7-tuple (resource name, inp_specs, out_specs, exec_type, periodic_interval, geographic_range, lifetime), specifying the name of the sensor resource to be shared as a service, the list of input parameters required to execute the service, the list of output objects and their specifications, the execution type of the service, the periodic time interval, the geographic area within which the service will be provided, and lastly the service lifetime in terms of the start and finish times of the service. A SPIRIT service can be of three types in terms of the time of execution: (1) INSTANTANEOUS: an INSTANTANEOUS service should begin executing as soon as the resources are available; the start time of this kind of service is defined to be zero in order to imply immediate execution. (2) RESERVE-AHEAD: a RESERVE-AHEAD service should be executed in the future, but the necessary resources should be reserved in advance. (3) CONDITIONAL: a CONDITIONAL service should be executed whenever a particular condition is satisfied during the service lifetime; the service start and finish times in this case define the absolute temporal boundary of the service execution.
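To make the service model concrete, the following minimal Python sketch shows one possible in-memory representation of such a 7-tuple service description and its execution types (all class and field names here are illustrative assumptions, not SPIRIT API identifiers):

from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple

class ExecType(Enum):
    INSTANTANEOUS = 1   # begins executing as soon as resources are available (start time = 0)
    RESERVE_AHEAD = 2   # executed in the future; resources reserved in advance
    CONDITIONAL = 3     # executed whenever a stated condition holds during the service lifetime

@dataclass
class ServiceDescription:
    resource_name: str                  # sensor resource shared as a service
    inp_specs: List[str]                # input parameters required to execute the service
    out_specs: List[str]                # output objects and their specifications
    exec_type: ExecType                 # execution type (see ExecType)
    periodic_interval: float            # seconds between successive service update messages
    geographic_range: Tuple[float, float, float]   # (latitude, longitude, radius in meters)
    lifetime: Tuple[float, float]       # (start time, finish time) of the service

# Example: a BREMON-like acceleration service shared for 30 minutes, updated every 10 s.
bremon = ServiceDescription("Acceleration", ["LAT", "LON", "TIME"], ["BreathRatePerMinute"],
                            ExecType.INSTANTANEOUS, 10.0,
                            (41.698333, -86.233889, 300.0), (0.0, 1800.0))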

4.1. QoS Specifications

QoS management on an end-to-end basis means that the applications can specify what quality they want and what cost they are willing to pay for it. QoS specifications deal with the definition of the requested QoS level and are typically offered through a specifications language or a comprehensive API. QoS can be specified along different dimensions of QoS categories such as performance, cost, or fault tolerance. In addition, a dimension can be defined as a qualitative or quantitative attribute of a category [30, 31].

In addition to the 7-tuple identification parameters described above, a SPIRIT service also includes two specification parameters, namely, service_constraints and qos_requirements. The first allows the service providers to specify service constraints in terms of a set of logical conditions which should be met prior to the service being executed. These constraints are associated with explicit effects that the service execution has on the physical resources of the service provider device (e.g., CPU/memory usage, residual battery level). On the other hand, the service-seeking clients are allowed to specify QoS requirements that are suitable for such highly mobile peer-to-peer environments. The loosely coupled and asynchronous nature of a mobile sensor sharing infrastructure like SPIRIT introduces nondeterministic behavior into the system. In addition, node mobility dictates the lifetime of a multihop path between the service provider and the client. As a result, there is no single multihop end-to-end path that exists throughout the entire service lifetime: the service update messages that are released at different time instances during the service lifetime are likely to travel on different paths. That is why it is not feasible to provide firm end-to-end QoS guarantees within such an infrastructure; such systems are only capable of providing “soft” QoS guarantees, that is, guarantees in which the firm QoS requirement is relaxed by tolerating certain degrees of fluctuation in the service quality. SPIRIT introduces a flexible soft-QoS-based model which is suitable for the proximity-aware applications enabled by the infrastructure, as described below.

The proximity-aware applications are similar to many real-time applications (e.g., multimedia and automotive control applications) in that it is acceptable to miss a few task deadlines occasionally. One of the popular solutions to the scheduling problem in real-time systems is the (m, k)-firm guarantee model [7]. In this model, a periodic task is said to have an (m, k)-firm guarantee requirement if it is adequate to meet the deadlines of any m out of k consecutive instances of the task, where m and k are positive integers and m ≤ k. The advantage of this guarantee model is that the applications can represent a wide range of tolerance to deadline misses by properly choosing the values of m and k. The (m, k) model is adapted by SPIRIT to specify the QoS requirements of the proximity-aware applications. To that end, it is adequate for these applications if the client side receives any m out of k consecutive update messages from a remote service provider. In addition, the service lifetime is implicitly divided into multiple discrete, subsequent time intervals: each such interval is called a subscription interval (SI). These intervals are of equal length, which in turn is computed from the requested (m, k) requirement. Dividing the service lifetime into discrete time intervals helps SPIRIT to distribute the single end-to-end QoS requirement into multiple subsequent end-to-end QoS requirements and thereby achieve fine-grained control over QoS provisioning in a highly dynamic network (this is described in detail in Section 5). This relaxation is implicit to the infrastructure: the requirements are signaled once for each SI in order to find and reserve the resources necessary for supporting QoS in a highly dynamic environment.
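As an illustration of the (m, k)-firm requirement, the short Python sketch below (the helper name and the boolean encoding of delivery outcomes are our own assumptions) checks whether every window of k consecutive update messages contains at least m successful deliveries:

def satisfies_mk(deliveries, m, k):
    # deliveries: list of booleans, True if the corresponding periodic update was received.
    # Returns True if every window of k consecutive messages has at least m successes.
    assert 0 < m <= k
    if len(deliveries) < k:
        # Not enough history yet: the requirement can still be met if the remaining
        # slots of the first window of k messages could all succeed.
        return sum(deliveries) >= m - (k - len(deliveries))
    return all(sum(deliveries[i:i + k]) >= m for i in range(len(deliveries) - k + 1))

# BREMON example with (m, k) = (3, 5): two losses in any 5 consecutive updates are tolerable.
print(satisfies_mk([True, True, False, True, False, True, True], m=3, k=5))   # True
print(satisfies_mk([True, False, False, True, False, True, True], m=3, k=5))  # False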

On the other hand, data delivery failure (e.g., due to node failure or disconnection from the network) is a very common phenomenon in highly mobile environments, and hence SPIRIT allows the service-seeking clients to request that the service data be persistent. Data persistence allows clients to receive the service update messages that were lost in the event of a data delivery failure. The service providers are required to log service update messages in persistent storage. When a client that failed or was disconnected rejoins the network and the durability of its subscription is still valid, the service provider resends the stored messages to the client. The SPIRIT service model allows a client to specify the level of data persistence by extending the (m, k) requirement to an (m, k, l) requirement: the additional parameter l indicates the number of SIs in the recent history for which the service provider is supposed to log all the service update messages for data persistence. Thus the SPIRIT service model provides a comprehensive description of the various aspects of service sharing in highly mobile environments.

4.2. Example Scenario

This section describes how the SPIRIT service sharing model works, using a mobile healthcare app called BREMON (introduced in Section 1) as an example. BREMON uses the SPIRIT infrastructure as an external library that provides support for discovery and sharing of acceleration data with simple QoS requirements in a multihop MANET. The shared data correspond to services provided by the infrastructure. SPIRIT runs as a background daemon and manages all the communications between the patient and paramedic devices, which serve as the service providers and service-seeking clients, respectively. These service providers and clients interact with the SPIRIT daemon via the set of function calls defined in the SPIRIT API. The constraints and conditions specified by the service providers and clients, respectively, are embedded in the API data objects, which are then passed to the daemon as arguments of the function calls. Each function call prompts the daemon to invoke routines to convert the SPIRIT API data objects into an ontology-based XML format that is universally interpretable within the infrastructure. A service is described using an ontology-based data type called a Service Profile (SP). An SP instance describes a service with its input and output types, capabilities, and constraints, and thus provides the detailed description of the service needed by a matchmaker software agent to perform an in-depth analysis of whether the service meets certain requirements. In addition to representing the capabilities of a service, an SP instance is also used to express the QoS requirements of a service-seeking client, so that a matchmaker software agent has a convenient dual-purpose representation upon which to base its operations.

Algorithm 1 shows an example of BREMON client requirements. In this example, a paramedic named Bob is looking for shared acceleration data services within one hundred meters of his current location. Bob’s interest in finding such shared services will last for the next two hours. The service execution should start instantaneously and a new service update message (containing the latest breathing activity data) is expected every 30 seconds. The QoS requirement of the desired service is (m, k, l) = (3, 5, 4), which means that any 3 out of 5 consecutive service update messages must be received at the client node and that the service provider should log all service update messages for the latest 4 subscription intervals.

<serviceProfile>
<sensorResource>Acceleration</sensorResource>
<serviceExecutionType>INSTANTANEOUS</serviceExecutionType>
<serviceAttributes>
  <updateInterval>  <Period>30</Period>  </updateInterval>
  <geographicRange>
   <Center>
    <LAT>+40.898333</LAT>  <LON>-86.453889</LON>
   </Center>
   <Radius>100</Radius>
  </geographicRange>
  <temporalRange>
   <beginsAt>12:15:00</beginsAt>  <endsAt>14:15:00</endsAt>
  </temporalRange>
  <qualityOfService>
   <m>3</m>  <k>5</k>  <l>4</l>
  </qualityOfService>
</serviceAttributes>
</serviceProfile>

Algorithm 2 shows an example of a BREMON service description. In this example, the service provider shares its acceleration data within a radius of 300 meters around its current location. The service lifetime spans a period that starts at the current time (say 12:05 pm) and ends after 30 minutes (e.g., until the time the patient would be transported to a hospital), and it executes periodically every 10 seconds. The output of the service is the current BPM of the patient. The service executes only if the condition (cpuUsage ≤ 40% && powerLevel ≥ 60%) is satisfied at the patient device during execution time. These abstract specifications and requirements in XML format are serialized into concrete messages of a transmittable format, which are then exchanged over the wireless network between the service providers and clients.

<serviceProfile>
<sensorResource>Acceleration</sensorResource>
<serviceInpSpec>
  <ioSpec>
   <argvName>Location</argvName>
   <argvSpec>LAT</argvSpec>
   <argvSpec>LON</argvSpec>
   <argvSpec>TIME</argvSpec>
  </ioSpec>
</serviceInpSpec>
<serviceOutSpec>
  <ioSpec>
   <argvName>BreathRatePerMinute</argvName>
   <argvSpec>Hz60</argvSpec>
  </ioSpec>
</serviceOutSpec>
<serviceAttributes>
  <serviceInterval>  <Period>10</Period>  </serviceInterval>
  <geographicRange>
   <Center>  <LAT>+41.698333</LAT>  <LON>-86.233889</LON>  </Center>
   <Radius>300</Radius>
  </geographicRange>
  <temporalRange>
   <beginsAt>12:05:00</beginsAt>  <endsAt>12:35:00</endsAt>
  </temporalRange>
</serviceAttributes>
<serviceConstraints>
  <Condition> cpuUsage ≤ 40%  && powerLevel ≥ 60% </Condition>
</serviceConstraints>
</serviceProfile>

5. SPIRIT Quality of Service Mechanisms

SPIRIT incorporates QoS concepts from the widely used Integrated Services (IntServ) [19] and Differentiated Services (DiffServ) [21] architecture models, while adapting them to suit the dynamic and asynchronous nature of highly mobile environments. SPIRIT delivers the end-to-end QoS required by the proximity-aware applications by explicitly managing the network and device resources. SPIRIT allows the service-seeking clients to signal the service provider and intermediate router nodes about the specific QoS required for the requested service stream. Once the service provider and all intermediate nodes along a multihop path accept a new service stream and reserve the resources necessary for meeting its QoS requirements, the service provider can initiate the service and begin transmitting the service update messages along the multihop data delivery path. Over the service lifetime, the QoS guarantee must be maintained in the face of network topology changes. SPIRIT provides a set of mobility-aware approaches to establish and maintain QoS in highly dynamic environments. This section describes the key QoS mechanisms in detail.

5.1. QoS Signaling

SPIRIT delivers the end-to-end QoS required by the proximity-aware applications by explicitly managing the network and device resources as the topology changes with time. To that end, the QoS requirements in SPIRIT are periodically signaled during the service lifetime (once for each SI) in order to find and reserve available resources in a highly dynamic network. This is called temporal distribution of subscription. Each of the SIs for one particular service stream is of equal length, which in turn is calculated from the (m, k) requirement requested by the client. We denote the length of an SI as T_SI. Given a request for a new periodic service stream with periodic interval P and an (m, k, l) requirement, we define the following temporal parameters:
(i) t_0: the time instance when the requested service would start and the first service update message would be released by the service provider.
(ii) t_i: the time instance when the i-th service update message would be released by the service provider; t_i = t_0 + i · P.
(iii) t_(k−1): the time instance when the (k − 1)-th service update message would be released by the service provider; t_(k−1) = t_0 + (k − 1) · P.
(iv) t_k: the time instance when the k-th service update message would be released by the service provider; t_k = t_0 + k · P.

In SPIRIT, T_SI is computed from the periodic interval P and the requested (m, k) requirement, as given in (1).

Note that the length of the SI is a critical design decision which provides a tradeoff between performance and resource overhead: an SI which is too short will lead to higher QoS in highly mobile environments, but it will also result in increased communications overhead because of frequent service discovery and QoS signaling. On the other hand, a longer SI will have reduced communications overhead but may result in poor QoS, as service update messages could get dropped/lost because of unexpected topology changes in MANETs. In SPIRIT, the length of an SI is just sufficient to meet the QoS requirements requested by the client. To that end, the client’s current subscription to the service is valid for a period of length T_SI; in other words, the service provider and intermediate nodes will reserve resources until the end of the current SI. On the other hand, the relationship between the value of l and the SIs is that the service provider is supposed to log all the service update messages from the latest l SIs (including the current SI). In other words, during the j-th SI, denoted as SI_j, the service provider is supposed to log all the messages released during the subscription intervals SI_(j−l+1), …, SI_j.

The client-side SPIRIT daemon renews its subscription at the end of each SI in order to reserve resources for the next SI. The subscription renewal in SPIRIT takes place in the form of a new service-path discovery: the QoS requirement of a service-path discovery is copied from the original service discovery request which was issued by the service-seeking client application. Like the original service discovery, the service-path discovery is broadcast in nature. However, unlike the service discovery, the service-path discovery request has a specific destination, that is, the service provider application that is sharing the service with the client application. The broadcast nature of the service-path discovery procedure results in multiple copies of the service discovery request (and response) messages arriving at the service provider (and client) node via multiple paths: this multipath attribute increases the likelihood of successful subscription renewal in MANETs, where messages are often lost or dropped. Nevertheless, if no response messages are received at the end of the service-path discovery, then the client-side SPIRIT daemon reattempts the subscription renewal up to three times. If all three reattempts fail, then the client application is notified that the service stream has been disrupted. Given that the service-path discovery takes some nontrivial amount of time, it is highly critical that the subscription is renewed on time for the next SI so that there is no disruption in the ongoing service stream. For this reason, the client-side SPIRIT daemon initiates the subscription renewal procedure before the current SI has expired: to that end, the subscription is renewed once the renewal interval of the current SI has elapsed. The renewal interval (RI) is denoted as T_RI and its length is defined in (2).

From (1) and (2), we can see that T_RI < T_SI.
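A minimal sketch of the client-side renewal logic described above is given below. It assumes, purely for illustration, that T_SI spans the release of k periodic messages (T_SI = k · P) and that renewal is started one period before the SI expires; the actual values are given by equations (1) and (2):

def interval_schedule(t0, P, k, n_intervals):
    # Illustrative only: returns (SI start, renewal instant) pairs under the
    # assumptions T_SI = k * P and T_RI = T_SI - P (not the paper's equations).
    T_SI = k * P
    T_RI = T_SI - P
    return [(t0 + i * T_SI, t0 + i * T_SI + T_RI) for i in range(n_intervals)]

def renew_subscription(send_service_path_discovery, max_attempts=3):
    # Reattempt the service-path discovery up to three times; the caller notifies
    # the application of a disrupted stream if this returns False.
    for _ in range(max_attempts):
        if send_service_path_discovery():   # True when a response message arrives
            return True
    return False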

5.2. Resource Policing

Multihop forwarding of periodic service streams in a spontaneous resource sharing environment like SPIRIT is entirely altruistic in that intermediate nodes incur processing and networking overhead in forwarding messages for “remote” service streams (by “remote” services we mean those services that the local node is not subscribed to). As a result, it is only natural that these intermediate nodes would want to limit the amount of resources being consumed by the remote service streams passing through that node. Resource policing is the procedure of regulating the amount of resources being consumed by the remote service streams: if a maximum limit is reached or exceeded, the policy manager denies a new service request.

To that end, SPIRIT allows each intermediate node to impose upper limits on the Maximum Average Bandwidth for All Streams and the Maximum Average Bandwidth per Single Stream. The SPIRIT policy manager drops the traffic of those remote service streams for which the average bandwidth consumption is higher than the defined limits. On the other hand, a client can request multiple remote service streams, each with a fairly low estimated average bandwidth: such requests have a higher chance of being accepted than requests with higher estimated bandwidth values. Policing is therefore required to ensure not only that no remote service stream exceeds the maximum limit but also that no single client can take over the entire pool of shared bandwidth. To this end, SPIRIT allows each intermediate node to specify a Maximum Average Bandwidth per Single Client and drops the traffic of a remote service stream if the overall average bandwidth consumption by the client associated with that stream exceeds this limit.
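The policing check described above can be sketched as follows (the limit parameters and data structures are illustrative; SPIRIT's policy manager interface is not specified here):

def admit_under_bandwidth_policy(est_bw, client_id, active_streams,
                                 max_all, max_per_stream, max_per_client):
    # active_streams: list of (client_id, avg_bandwidth) pairs for the remote streams
    # currently forwarded by this node; est_bw is the new stream's estimated average bandwidth.
    total = sum(bw for _, bw in active_streams)
    client_total = sum(bw for cid, bw in active_streams if cid == client_id)
    return (est_bw <= max_per_stream and              # per-stream limit
            total + est_bw <= max_all and             # limit over all remote streams
            client_total + est_bw <= max_per_client)  # per-client limit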

In addition to managing and policing the bandwidth consumption of remote service streams, the SPIRIT QoS Module (QoSM) also controls the amount of persistent storage consumed for offering data persistence to the periodic service streams. Persistent data delivery requires the service provider to log the service update messages in its local storage. Given a service request with an (m, k, l) QoS requirement, the service provider is required to store all the messages released during the latest l subscription intervals. In order to reduce the storage overhead, the service provider only logs the most recent messages that are just sufficient to meet this QoS requirement. On the other hand, each intermediate node logs only the service update messages from the current subscription interval (this is justified because the same intermediate node may not participate in forwarding the periodic service stream during its next subscription interval). These messages are logged in a FIFO-style virtual queue on the persistent storage and each stored message is timestamped: when a new update message is released and the queue is full, the oldest message is discarded to make room for the new one. SPIRIT allows each service provider and intermediate node to specify parameters for limiting the persistent storage consumed by remote service streams, in a fashion similar to the bandwidth limits. To that end, each node can specify the following parameters: Maximum Data Storage for All Streams, Maximum Data Storage per Single Stream, and Maximum Data Storage per Single Client. The SPIRIT policy manager drops the traffic of a service stream if the overall storage consumption by the client associated with that remote service stream exceeds any of these limits. All these bandwidth and storage limit parameters are defined at setup time. In addition, SPIRIT allows the service providers/intermediate nodes to specify rules so that the values of these parameters change at runtime according to the current resource availability at these nodes.
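The FIFO-style, timestamped message log can be sketched as follows (capacity, class, and method names are illustrative):

import collections
import time

class PersistentMessageLog:
    # Bounded FIFO log of service update messages: when the queue is full,
    # the oldest entry is discarded to make room for a new one.
    def __init__(self, capacity):
        self.queue = collections.deque(maxlen=capacity)

    def log(self, message):
        self.queue.append((time.time(), message))    # timestamp each stored message

    def replay_since(self, t):
        # Messages stored at or after time t, e.g., for a client that rejoins the network.
        return [msg for ts, msg in self.queue if ts >= t]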

5.3. QoS Classification

As previously described in Section 4, SPIRIT services are classified into three types in terms of execution time, that is, INSTANTANEOUS, RESERVE-AHEAD, and CONDITIONAL. This classification is similar to DiffServ networks, in which requests are often classified into two categories: Book-Ahead (BA) requests and INSTANTANEOUS Requests (IR). When an admitted BA request becomes active, some of the ongoing IR are dropped if the bandwidth is insufficient for supporting both the IR and BA requests. Several look-ahead strategies [32–34] have been proposed in recent literature to prevent the IR from being dropped. The admission control procedure in SPIRIT (described next in Section 5.4) incorporates the look-ahead strategy proposed in [32] in order to prevent an INSTANTANEOUS/CONDITIONAL service from being preempted when a previously admitted RESERVE-AHEAD service stream becomes active. To that end, the admission control procedure looks up its RESERVE-AHEAD services list to determine whether the available resources will be sufficient for supporting a requested service stream for the duration of an SI.

5.4. Admission Control

Admission control is a validation procedure which performs a check at an intermediate or service provider node before a new service request is accepted: the purpose of the admission control test is to determine whether the current resource conditions at the local node are sufficient for offering the QoS requested by the service-seeking client, without any QoS violations. Once the request is accepted, the SPIRIT infrastructure will ensure that the client is served with its requested QoS parameters. Thus admission control is critical for (1) providing the required QoS to the currently supported service streams and (2) ensuring that the locally imposed resource policies are not violated. Additionally, in SPIRIT the service messages and updates from the service provider are forwarded to the clients over a multihop path by a number of mobile intermediate nodes. However, frequent topology changes impact the availability and utilization of resources: an intermediate node may not be available for an entire SI despite having all the physical resources (e.g., bandwidth) needed to meet the QoS requirements of a service stream. The admission control process should be able to take this into account so that an ongoing service stream is not dropped when an intermediate node moves out of the service region. To address this challenge, SPIRIT proposes a mobility prediction model to guide the admission control process: its purpose is to predict the temporal availability of a node and reject those requests which are unlikely to be satisfied. Incorporating mobility-awareness in the admission control procedure allows SPIRIT to reduce the number of interruptions experienced by the service streams and thereby also increase the effective bandwidth utilization. Furthermore, the admission control process also checks whether the local node has sufficient storage to meet the data persistence requirement of the requested service stream.

The decision process of admission control is represented by a set of test conditions that the system evaluates in order to determine whether or not to accept the request from a service-seeking client. The values of the m, k, and l parameters requested by the client impact the decision of the admission controller at each service provider and intermediate node. When a client requests a new service, the admission control process takes the request as input and calculates the amount of resources required by (1) the service streams already being supported and (2) the pending service request. If the sum of the two is not larger than the resource limits defined in Section 5.2, then the service request will be admitted to the system; otherwise, the request will be rejected. For each incoming new service request with an (m, k, l) requirement, the SPIRIT admission control process evaluates the following test conditions.

Average Bandwidth-Based Admission Control. Given the current bandwidth usage at the intermediate node, would it be able to accept the new service request with an (m, k, l) requirement without violating the QoS of the ongoing and reserved-ahead streams?

SPIRIT requires each intermediate node to maintain the remote services passing through that node in two lists: (1) a list of currently active services and (2) a list of RESERVE-AHEAD services. Before making a commitment to meet the requirement requested by a client, an intermediate node is required to make sure that it will be able to provide the minimum average bandwidth required to forward any m out of k consecutive service messages from that remote service stream. We denote the required minimum average bandwidth of a periodic service stream, requested by a client with an (m, k, l) requirement, as B_min. The value of B_min is defined in (3), where P and D indicate the periodic interval and the size of the shared sensor information of the remote service stream, respectively:

B_min = (m · D) / (k · P).  (3)

Let us assume that an intermediate node maintains a number of remote service streams in its currently active and RESERVE-AHEAD services lists. Next, the node receives a request from a client with an (m, k, l) requirement. The admission control condition differs depending on the execution type of the requested service, as described below.

(1) Average bandwidth-based admission control for an INSTANTANEOUS service: If the request is for an INSTANTANEOUS service, the average bandwidth-based admission control test performs a look-up on both the currently active and RESERVE-AHEAD services lists and checks whether the available average bandwidth is sufficient for the service request. If the node were to accept the service request, then the test would require that all three conditions in (4), (5), and (6) be satisfied. Equation (4) states that the sum of (1) the overall minimum average bandwidth of the currently active remote services, (2) the overall minimum average bandwidth of the RESERVE-AHEAD services that will become active during the intended lifetime of the requested service, and (3) the required minimum average bandwidth of the requested service should be less than the Maximum Average Bandwidth for All Streams.

Equation (5) states that the required minimum average bandwidth of the requested service should be less than the Maximum Average Bandwidth per Single Stream.

Equation (6) states that the sum of (1) the overall minimum average bandwidth of all the currently active services subscribed to by the requesting client, (2) the overall minimum average bandwidth of all the RESERVE-AHEAD services subscribed to by the requesting client, and (3) the required minimum average bandwidth of the requested service should be less than the Maximum Average Bandwidth per Single Client. These three conditions together ensure that the bandwidth allocated for remote service streams is efficiently utilized without any QoS policy violations.

(2) Average bandwidth-based admission control for a RESERVE-AHEAD service: If the request is for a RESERVE-AHEAD service, the average bandwidth-based admission control test performs a look-up on the RESERVE-AHEAD services list and checks whether the available average bandwidth is sufficient for the service request. If the node were to accept the service request, then the test would require that all three conditions in (7), (8), and (9) be satisfied. Equation (7) states that the sum of (1) the overall minimum average bandwidth of all the RESERVE-AHEAD services that will become active during the intended lifetime of the requested service and (2) the required minimum average bandwidth of the requested service should be less than the Maximum Average Bandwidth for All Streams. Equation (8) states that the required minimum average bandwidth of the requested service should be less than the Maximum Average Bandwidth per Single Stream. Equation (9) states that the sum of (1) the overall minimum average bandwidth of all the RESERVE-AHEAD services subscribed to by the requesting client and (2) the required minimum average bandwidth of the requested service should be less than the Maximum Average Bandwidth per Single Client. These three conditions together ensure that the bandwidth allocated for remote service streams is efficiently utilized without any QoS policy violations.
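The two bandwidth-based tests can be summarized in code as follows (a sketch under the assumption that each stream's minimum average bandwidth has been precomputed as in (3); the list, field, and limit names are our own):

def overlaps(stream, start, end):
    # True if a RESERVE-AHEAD stream will be active at some point during [start, end].
    return stream["start"] < end and stream["end"] > start

def admit_bandwidth(request, active, reserved, max_all, max_per_stream, max_per_client):
    # request, and each entry of active/reserved, is a dict with keys:
    # "b_min" (minimum average bandwidth), "client", "start", "end"; request also has "exec_type".
    b_new = request["b_min"]
    # RESERVE-AHEAD streams that will become active during the requested lifetime (look-ahead).
    future = [s for s in reserved if overlaps(s, request["start"], request["end"])]
    if request["exec_type"] == "RESERVE_AHEAD":
        pool = future                       # corresponds to conditions (7)-(9)
    else:                                   # INSTANTANEOUS or CONDITIONAL
        pool = active + future              # corresponds to conditions (4)-(6)
    total = sum(s["b_min"] for s in pool)
    client_total = sum(s["b_min"] for s in pool if s["client"] == request["client"])
    return (b_new < max_per_stream and
            total + b_new < max_all and
            client_total + b_new < max_per_client)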

Node Mobility-Based Admission Control. Given the mobility of the intermediate node, would it be able to forward any m out of k consecutive service updates during the service lifetime so that the new service stream meets its (m, k) requirement?

Before making a commitment to meet the (m, k) guarantee requested by a client, an intermediate node needs to make sure that it will be within the geographic boundary of the requested service for long enough to forward any m packets of k consecutive packets to a downstream node that takes the packets closer to the location of the client. The SPIRIT QoSM further relaxes the admission test criterion from any m packets to the first m packets (m ≤ k). Without this relaxed criterion, an intermediate node that would not be around to forward all k packets would reject the service request; in fact, in highly mobile environments it is extremely difficult to find a node that will be around long enough to forward all k packets. The relaxed criterion says that it is sufficient if an intermediate node can forward only the first m packets (the relaxation adds tolerance to the packet loss phenomenon which is inherent in such environments). Thus the relaxed criterion is expected to increase the likelihood of a service request with an (m, k) requirement being accepted at an intermediate node. Note that the prediction model used in the node mobility-based admission test is extended from the mobility prediction model used in the Courier group communications algorithm proposed in [6]. To that end, this prediction model uses the location and velocity of a node to define a Node Forwarding Zone (NFZ) that covers a region capturing all possible locations of that node at a future time. For instance, consider the current location of a node at time t1. At some future time t2 (t2 > t1) it is possible to define a circular region centered at the node’s location at t1: from Figure 3, we can see that, during the time interval [t1, t2], the node cannot be anywhere but inside the circular region centered at that location with radius s · (t2 − t1), where s is the node’s recorded speed at time t1. This circle is the Node Forwarding Zone of the node during the time interval [t1, t2].

Let us assume that an intermediate node receives a request for a periodic service with an (m, k) requirement. For the node mobility-based admission control test, the prediction model defines, in addition to the temporal parameters t_0, t_i, and T_SI introduced previously in this section, the release time of the m-th service update message as well as the circular disks and rings (based on the NFZ construction above) that capture all possible locations of the intermediate node during the corresponding time intervals.

If the service request is to be accepted with the requested (m, k) requirement, then the node mobility-based admission control test requires the NFZ of the intermediate node, computed up to the release time of the m-th service update message, to be completely inside the geographic boundary of the requested service, so that the node can forward at least the first m packets of k consecutive packets to a downstream node that takes the packets closer to the location of the client before the node moves out of the service region.
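A sketch of the mobility-based test is shown below (the geometry helpers, parameter names, and the use of planar coordinates are our own simplifications; it follows the NFZ idea of bounding the node's position by its recorded speed):

import math

def nfz_radius(speed, t_now, t_future):
    # Maximum distance the node can travel between t_now and t_future.
    return speed * (t_future - t_now)

def nfz_inside_service_region(node_pos, speed, t_now, t_mth_release,
                              service_center, service_radius):
    # True if the Node Forwarding Zone up to the release time of the m-th packet
    # is guaranteed to lie inside the circular service region, i.e., the node will
    # still be in the region when the first m packets need to be forwarded.
    dist_to_center = math.hypot(node_pos[0] - service_center[0],
                                node_pos[1] - service_center[1])
    return dist_to_center + nfz_radius(speed, t_now, t_mth_release) <= service_radius

# Example: a node 50 m from the service center moving at 2 m/s must stay inside a
# 300 m service region until the m-th update is released 60 s from now.
print(nfz_inside_service_region((50.0, 0.0), 2.0, 0.0, 60.0, (0.0, 0.0), 300.0))  # True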

Data Persistence-Based Admission Control. Does the current node have sufficient storage to satisfy the data persistence requirement of the new service stream, without violating the QoS of ongoing and reserved-ahead streams?

Before making a commitment to meet the (m, k, l) requirement requested by a service-seeking client, a service provider is required to make sure that its current storage situation would allow it to log the service update messages released during each of the latest l subscription intervals (including the current interval). In other words, at the j-th subscription interval SI_j, the service provider is required to log all the messages released during the subscription intervals SI_(j−l+1), …, SI_j. On the other hand, each intermediate node is required to log only the service update messages released during the current subscription interval SI_j. We denote the required minimum data storage of such a periodic service stream as S_min; its value is defined in (10) and (11) for the service provider and an intermediate node, respectively.

The test conditions for data persistence-based admission control are similar to the conditions for average bandwidth-based admission control and are therefore skipped in this paper for the sake of brevity.
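For illustration, a hedged sketch of the storage check is given below; it assumes that each SI contains k update messages of a fixed size, that a provider retains the messages of the latest l SIs, and that an intermediate node retains only those of the current SI (the paper's exact expressions are equations (10) and (11)):

def required_storage(msg_size, k, l, is_provider):
    # Bytes of persistent storage assumed to be needed for an (m, k, l) stream:
    # l SIs worth of messages at the provider, one SI worth at an intermediate node.
    return (l if is_provider else 1) * k * msg_size

def admit_storage(msg_size, k, l, is_provider, used, max_storage):
    # Accept only if the additional log fits within the locally imposed storage limit.
    return used + required_storage(msg_size, k, l, is_provider) <= max_storage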

Admission Control Test Procedure. If a service request passes all three admission control tests (i.e., average bandwidth-based, node mobility-based, and data persistence-based), then the service is accepted at the node with the requested (m, k, l) requirement and the service discovery request (SDREQ) message is rebroadcast in the node’s neighborhood. On the other hand, if the service request fails one or more of the admission control tests, then the SPIRIT QoSM rejects the service request. Figure 4 shows the flow of the key QoS mechanisms at the client, intermediate, and service provider nodes over a multihop path of intermediate nodes. When the subscription renewal interval is up, the SPIRIT client signals its QoS requirements for the next SI via a new service-path discovery. An intermediate node receives the SDREQ message and forwards it to its local admission control process. If the QoS requested in the SDREQ message passes all the admission control tests, then the node forwards the message to all those neighbors which are located closer to the service provider than itself. The same procedure is repeated by all subsequent intermediate nodes before the SDREQ message is ultimately received by the service provider node. When the service provider receives a new SDREQ message for the next SI of an ongoing service stream, it reserves resources as requested in the SDREQ message and forwards a QoS Guarantee Acknowledgment (QoSGACK) message over the newly discovered multihop service path in the reverse direction. As each intermediate node in the service path receives this QoSGACK message, it reserves resources in the same way as the service provider and then forwards the message to the next intermediate node. The QoSGACK message is ultimately received by the client node and serves as a notification to the client that the infrastructure has reserved the resources necessary for supporting its QoS requirements for the next SI. Finally, at the onset of the next SI, the service provider begins transmitting the service update messages over the multihop service path while meeting the requested QoS requirements.
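The per-node flow of Figure 4 can be summarized as follows (message fields, method names, and the node object are illustrative; they do not reflect SPIRIT's actual wire format):

def on_sdreq(node, sdreq):
    # Handle a service discovery request carrying the client's QoS for the next SI.
    if not (node.admit_bandwidth(sdreq) and
            node.admit_mobility(sdreq) and
            node.admit_persistence(sdreq)):
        return                                   # failed an admission test: drop the request
    # Forward only to neighbors that are geographically closer to the service provider.
    my_dist = node.distance_to(sdreq.provider_location)
    for nbr in node.neighbors:
        if nbr.distance_to(sdreq.provider_location) < my_dist:
            node.send(nbr, sdreq)

def on_qosgack(node, qosgack):
    # Handle the acknowledgment traveling back along the newly discovered path.
    node.reserve_resources(qosgack.stream_id, qosgack.qos)   # reserve for the next SI
    node.send(qosgack.previous_hop, qosgack)                 # continue toward the client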

6. Evaluation

In order to evaluate the performance of SPIRIT, we performed simulations to study the impact of complex network parameters. In addition to simulation, we also implemented SPIRIT as a proof-of-concept Android app and performed a small-scale experiment to see how accurately the simulation results match the results from a real system. As described later, the real experimental results exhibit similar trends as the simulation results, albeit on a smaller scale. The simulation and proof-of-concept app experimental setups and results are described below in detail.

6.1. Simulation

We investigated the performance of SPIRIT using the OPNET simulator [35].

6.1.1. Simulation Strategy and Mechanisms

In order to implement the QoS mechanisms for SPIRIT in OPNET, we have used an approach similar to that proposed in [36–38]. In this approach, resources (e.g., processor and buffer space) are preallocated for satisfying the QoS requirements of sensitive services. Additionally, queuing techniques are utilized for QoS provisioning, as they result in predictable network performance by providing dedicated bandwidth, controlled jitter and latency, and improved packet loss characteristics.

As described above, SPIRIT adapts the (m, k)-firm guarantee model to specify the QoS requirements of the proximity-aware applications and further extends it to a relaxed (m, k, l) requirement that enables the client to specify the level of data persistence with the additional parameter l. On the other hand, there are two levels of QoS management schemes available in OPNET, that is, for (i) local and (ii) global QoS requirements. Previous work [39] has primarily focused on the global QoS requirement for implementing the (m, k)-firm requirement and utilized the Weighted Fair Queuing (WFQ) [8] scheme. In order to implement the relaxed (m, k, l) requirement in SPIRIT, we utilized a similar approach that considers both local and global QoS requirements. To that end, the parameters m and k together represent the global requirement and the parameter l represents the local requirement. The global requirement is used to guarantee the arrival of any m service update messages out of k consecutive messages. In the simulation this is achieved by appropriate assignment of the WFQ weight value. The local requirement allows each client to request persistent QoS by using the parameter l, which in turn is utilized in specifying the Maximum Average Bandwidth per Single Client at each intermediate node. In the simulation, we adjust the relative bandwidth of each queue while forwarding the data packets to their destinations.

While the WFQ part is not integral to the proposed QoS mechanisms in SPIRIT, the purpose of using it in the simulation is to show an implementation path for the QoS mechanisms in real networks. To that end, WFQ is widely used by network schedulers for data packet scheduling. It is a packet-by-packet generalization of Fair Queuing (FQ); that is, WFQ allows specifying, for each flow, the fraction of the link capacity it will be given. This makes WFQ an attractive implementation path for SPIRIT QoS. Additionally, previous work [40] has already demonstrated the value of (m,k)-firm guarantee based WFQ for real-time applications (e.g., multimedia); that is, it provides a well-defined delay guarantee for network flows that can tolerate packet drops as defined by the (m,k) requirement. Based on this previous work and the proposed SPIRIT service model with its relaxed requirement, a modified (m,k)-WFQ is a natural choice for demonstrating the real-network implementation opportunities for SPIRIT.
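
For readers unfamiliar with WFQ, the following self-clocked, textbook-style sketch shows the core idea of scheduling packets by per-flow virtual finish times, where a flow's weight (here derived, for instance, from its m/k ratio) determines its share of the link. It is an illustrative approximation written for this paper, not the OPNET WFQ implementation used in our simulation.

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Self-clocked approximation of weighted fair queuing, for illustration only.
public final class WfqScheduler {

    public static final class Packet {
        final String flowId;
        final int sizeBytes;

        public Packet(String flowId, int sizeBytes) {
            this.flowId = flowId;
            this.sizeBytes = sizeBytes;
        }
    }

    private final Map<String, Double> weights = new HashMap<>();    // flow -> weight (e.g., its m/k ratio)
    private final Map<String, Double> lastFinish = new HashMap<>(); // flow -> last virtual finish time
    private final Map<String, Queue<Packet>> queues = new HashMap<>();
    private final Map<Packet, Double> finishTag = new HashMap<>();
    private double virtualTime = 0.0;

    public void addFlow(String flowId, double weight) {
        weights.put(flowId, weight);
        queues.put(flowId, new ArrayDeque<Packet>());
    }

    /** Tag the packet with a virtual finish time; a higher weight means an earlier finish. */
    public void enqueue(Packet p) {
        double start = Math.max(virtualTime, lastFinish.getOrDefault(p.flowId, 0.0));
        double finish = start + p.sizeBytes / weights.get(p.flowId);
        lastFinish.put(p.flowId, finish);
        finishTag.put(p, finish);
        queues.get(p.flowId).add(p);
    }

    /** Transmit the head-of-line packet with the smallest virtual finish time. */
    public Packet dequeue() {
        Packet best = null;
        for (Queue<Packet> q : queues.values()) {
            Packet head = q.peek();
            if (head != null && (best == null || finishTag.get(head) < finishTag.get(best))) {
                best = head;
            }
        }
        if (best != null) {
            queues.get(best.flowId).remove();
            virtualTime = finishTag.remove(best); // self-clocked virtual time update
        }
        return best;
    }
}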

In addition to the relaxed requirement, SPIRIT also performs admission control that considers node mobility and proximity when selecting intermediate nodes. In OPNET, proximity-awareness can be expressed through parameters such as the received signal strength (a path loss threshold in dB), a distance threshold, and channel match criteria. We use the distance threshold as the criterion for selecting the next intermediate nodes for data packet forwarding.
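
The sketch below illustrates this distance-threshold criterion: an intermediate node keeps only those neighbors that are within the proximity threshold and strictly closer to the service provider than itself, so each forwarding step makes progress toward the provider. The Neighbor type and its fields are assumptions introduced for this example.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of distance-threshold-based neighbor filtering for SDREQ forwarding.
public final class NextHopSelector {

    public static final class Neighbor {
        final String id;
        final double distanceToMe;        // meters, from positioning or signal strength
        final double distanceToProvider;  // meters, toward the service provider

        public Neighbor(String id, double distanceToMe, double distanceToProvider) {
            this.id = id;
            this.distanceToMe = distanceToMe;
            this.distanceToProvider = distanceToProvider;
        }
    }

    public static List<Neighbor> selectNextHops(List<Neighbor> neighbors,
                                                double myDistanceToProvider,
                                                double distanceThresholdMeters) {
        List<Neighbor> candidates = new ArrayList<>();
        for (Neighbor n : neighbors) {
            // Keep neighbors that are reachable and closer to the provider than this node.
            if (n.distanceToMe <= distanceThresholdMeters
                    && n.distanceToProvider < myDistanceToProvider) {
                candidates.add(n);
            }
        }
        return candidates;
    }
}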

6.1.2. Simulation Setup

We created a network of fixed dimensions and assume a random mobility pattern for all nodes; every node uses the GRP routing protocol. Every forwarding/router node in the network uses the class-based WFQ scheme with a buffer size of 100 bytes to implement the (m,k)-based queuing method. Nodes can identify their neighbors within a range of 20 meters. Table 1 summarizes the node and network configuration parameters.

Figure 5 shows the simulation scenario, in which we employed a group of four servers and clients and one router to evaluate the performance of SPIRIT using the WFQ queuing scheme. In this scenario, we evaluated the global QoS requirement by assigning the weight value (as a priority parameter) equal to the m/k ratio. In other words, each service has a priority (i.e., its relative weight in the WFQ scheme) determined by its m/k ratio. The router forwards data packets to the clients based on this weight value.

6.1.3. Simulation Results

Global Requirements. We studied the performance of the network in terms of three metrics: delay, delay variation, and the number of packets received by clients. In the first simulation scenario, we consider only the global QoS requirement, that is, the (m,k) requirement.

Figure 6 shows the average delay for three services with varying weight values (10, 50, and 100). The figure shows that increasing the m/k ratio, and hence the assigned weight, results in a reduction of the average delay. Figure 7 shows the average delay variation for the same three services; increasing the relative weight value results in an increase of the average delay variation. We will show later in this section how SPIRIT addresses this issue by utilizing the relaxed requirement. Figure 8 presents the average packet reception by clients and shows that increasing the relative weight results in an increase of packet reception. It is noteworthy that beyond a certain weight value no significant further improvement in packet reception is achieved. This phenomenon is also corroborated by Figure 7; that is, increasing the relative weight value has an unfavorable impact on the average delay variation. Figure 9 shows that the network throughput increases with increasing relative weight (consistent with the packet reception results presented in Figure 8). The reason why increasing the relative weight improves the packet reception rate and throughput is that a larger relative weight translates into the assignment of larger buffer space to the corresponding network services. However, an important parameter for satisfying the QoS requirement is the rate at which packets are forwarded from the buffer. This parameter is not considered in the global QoS requirement, and we show next how adding the local parameter in SPIRIT further improves the QoS.

Global and Local Requirements. In the next simulation scenario, we demonstrate how adding the third, local parameter helps improve the network performance in SPIRIT. We implemented the third parameter (i.e., the local data persistence level/buffer size requirement) by adjusting the ratio of reserved bandwidth to buffer size for each queue, which in turn translates to configuring the router so that it forwards data packets with respect to the local QoS of nodes in the network. Note that although the simulation explicitly used a router for simplicity of implementation, in a typical MANET all nodes are expected to be capable of assuming the role of a router and adjusting the packet forwarding parameter accordingly. Therefore, a node with a given global (m,k) requirement can request higher QoS by signaling a higher reserved bandwidth to local buffer size ratio to the router.
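
The following sketch illustrates, under our own simplifying assumptions, how a forwarding node could expose the local requirement as a per-queue reserved-bandwidth-to-buffer-size ratio; the RouterConfig structure is invented for this example and is not part of SPIRIT or OPNET.

import java.util.HashMap;
import java.util.Map;

// Illustrative per-queue configuration: a higher bandwidth-to-buffer ratio means
// packets drain from the buffer faster, i.e., stronger local QoS for that client.
public final class RouterConfig {

    public static final class QueueSetting {
        final double reservedBandwidthBps;
        final int bufferSizeBytes;

        public QueueSetting(double reservedBandwidthBps, int bufferSizeBytes) {
            this.reservedBandwidthBps = reservedBandwidthBps;
            this.bufferSizeBytes = bufferSizeBytes;
        }

        public double bandwidthToBufferRatio() {
            return reservedBandwidthBps / bufferSizeBytes;
        }
    }

    private final Map<String, QueueSetting> queues = new HashMap<>();

    public void configureQueue(String clientId, double reservedBandwidthBps, int bufferSizeBytes) {
        queues.put(clientId, new QueueSetting(reservedBandwidthBps, bufferSizeBytes));
    }

    public double ratioFor(String clientId) {
        return queues.get(clientId).bandwidthToBufferRatio();
    }
}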

Figures 10-13 show the network performance when utilizing the relaxed requirement. Figure 10 shows that increasing the reserved bandwidth for a service results in a decrease of the average delay. Figures 11, 12, and 13 show a similarly positive impact of increasing the reserved bandwidth on (reduced) average delay variation, (increased) packet reception, and (increased) packet throughput, respectively. Note that these figures show the effect of reserved bandwidth assignment for a fixed global (m,k) requirement; other (m,k) values yield similar results.

Scalability. As discussed above, SPIRIT effectively improves the QoS performance in comparison to the simple (m,k)-firm guarantee approach in a small-scale network. On the other hand, scalability is a serious concern in MANETs. In order to study how SPIRIT performs in a large-scale network, we implemented a network of one server and a group of 32 clients that also act as routers. Figure 14 shows that SPIRIT offers no significant improvement in QoS performance in large-scale networks.

Reliability. In order to evaluate the reliability performance of SPIRIT, we simulated a simple network consisting of a server, a client, and a set of intermediate nodes that forward data packets. We utilized a constant (m,k) requirement (i.e., a fixed m/k ratio and a fixed percentage of reserved bandwidth) and changed the number of intermediate nodes (from 1 to 3) to evaluate the routing signaling and reliability performance of SPIRIT. Figure 15 shows that the average number of packets dropped by intermediate nodes decreases with an increasing number of intermediate nodes. However, it is noteworthy that while higher path density (i.e., a higher number of intermediate nodes) improves routing reliability, it also increases the average signaling overhead between nodes. Figure 16 demonstrates this effect by showing the average received routing traffic in the network. Figure 17 presents the (m,k) requirement success ratio, that is, the ratio of the number of received packets that met their (m,k) requirement to the number of sent packets. As seen in Figure 17, the success ratio is high when the m/k ratio is low, and it decreases as the m/k ratio increases.
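
For clarity, the sketch below shows one way the (m,k) requirement success ratio could be computed offline from a per-packet delivery trace, using a sliding window over the last k packets. The lenient handling of the first few packets is an assumption of this example; it is an evaluation helper written for illustration, not SPIRIT code.

import java.util.ArrayDeque;
import java.util.Deque;

// Computes the fraction of sent packets that were received and whose window of
// the last k consecutive packets contained at least m arrivals.
public final class MkSuccessRatio {

    public static double successRatio(boolean[] delivered, int m, int k) {
        if (delivered.length == 0) return 1.0;
        Deque<Boolean> window = new ArrayDeque<Boolean>();
        int receivedInWindow = 0;
        int satisfied = 0;
        for (boolean wasReceived : delivered) {
            window.addLast(wasReceived);
            if (wasReceived) receivedInWindow++;
            if (window.size() > k && window.removeFirst()) receivedInWindow--;
            // Warm-up: with fewer than k packets sent, scale the requirement down
            // to the number of packets seen so far.
            int required = Math.min(m, window.size());
            if (wasReceived && receivedInWindow >= required) satisfied++;
        }
        return (double) satisfied / delivered.length;
    }
}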

Mobility Prediction. In order to evaluate the performance of the mobility prediction model utilized by the SPIRIT admission control process, we selected a random mobility pattern for all intermediate nodes and changed the degree of prediction for the temporal availability of intermediate nodes by varying the coverage area used for admission control. Figures 18 and 19 show the average received traffic and the average routing traffic, respectively. These figures show that utilizing a larger coverage area (i.e., increasing the predicted temporal availability of intermediate nodes) results in increased received and routing traffic.
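
As an illustration of the kind of temporal-availability estimate such a coverage area implies, the sketch below computes how long a neighbor moving in a straight line is predicted to remain within a circular coverage radius. The straight-line motion assumption and the method name are choices made for this example and do not reproduce the exact mobility prediction model used by SPIRIT.

// Geometric approximation written only for this example.
public final class TemporalAvailability {

    /**
     * Returns the predicted number of seconds until a neighbor at relative
     * position (px, py) moving with relative velocity (vx, vy), both in meters
     * and meters/second, leaves a circular coverage area of the given radius.
     */
    public static double secondsInRange(double px, double py,
                                        double vx, double vy, double radius) {
        double a = vx * vx + vy * vy;
        double b = 2.0 * (px * vx + py * vy);
        double c = px * px + py * py - radius * radius;
        if (c > 0) return 0.0;                          // already outside the coverage area
        if (a == 0.0) return Double.POSITIVE_INFINITY;  // no relative motion: never leaves
        double disc = b * b - 4.0 * a * c;              // non-negative whenever c <= 0
        return (-b + Math.sqrt(disc)) / (2.0 * a);      // positive root = predicted exit time
    }
}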

6.2. Proof-of-Concept Implementation

This section describes a proof-of-concept implementation of simple QoS provisioning within the SPIRIT infrastructure. The BREMON app presented in Section 1 was implemented and evaluated on off-the-shelf Android smartphones. The underlying SPIRIT infrastructure uses the Android Bluetooth API to connect the phones in a peer-to-peer ad hoc fashion; this ad hoc networking capability enables paramedics using BREMON to monitor patients at a postdisaster site where Wi-Fi/3G network availability may be inconsistent or disrupted. Furthermore, using Bluetooth enables BREMON to consume less energy than it would over a Wi-Fi/3G network.
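
As an illustration of the kind of peer link involved, the sketch below opens a point-to-point RFCOMM connection with the standard Android Bluetooth API and pushes a single service update to a peer. The service UUID, the peer-address handling, and everything other than the Android API calls themselves are placeholders chosen for this example rather than the actual SPIRIT daemon code; discovery and error handling are omitted for brevity.

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;

import java.io.IOException;
import java.io.OutputStream;
import java.util.UUID;

// Minimal RFCOMM link sketch between two phones, for illustration only.
public final class BluetoothLink {

    // Placeholder service UUID assumed to be agreed on by both peers.
    private static final UUID SERVICE_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

    public static void sendUpdate(String peerMacAddress, byte[] serviceUpdate) throws IOException {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        BluetoothDevice peer = adapter.getRemoteDevice(peerMacAddress);

        // Open an RFCOMM channel to the peer and push one service update message.
        BluetoothSocket socket = peer.createRfcommSocketToServiceRecord(SERVICE_UUID);
        try {
            socket.connect();
            OutputStream out = socket.getOutputStream();
            out.write(serviceUpdate);
            out.flush();
        } finally {
            socket.close();
        }
    }
}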

In order to evaluate the performance of BREMON, we performed experiments in a testbed of 5 Google Nexus One smartphones running the Android 2.3 (Gingerbread) operating system. Ten runs were conducted for each metric, and the final results were taken as the average of the ten runs. The duration of each experiment was 600 seconds. A two-hop network topology was created as shown in Figure 20. Phones A and B served as two service provider devices, phones D and E served as two client devices, and phone C served as the intermediate node connecting the service provider and client devices in a peer-to-peer multihop fashion. Two periodic service streams flow between the two service provider-client pairs: the first releases a new service update message every 1 second, and the second releases a new service update message every 100 milliseconds. We varied the m/k ratio (i.e., the (m,k) guarantee requirement) of both services in the range of 50-100% in steps of 10%. Using this setup we evaluated the performance of BREMON as described below.

Figure 21 presents the (m,k) guarantee success rate, that is, the number of service update messages in each service stream that met their (m,k) guarantee requirement. As shown in Figure 21, both service streams achieve a 100% (m,k) guarantee success rate when the m/k ratio is low; the success rate decreases as the m/k ratio increases. Note that the success rate of the faster (100 ms) service stream is slightly lower than that of the slower (1 s) stream; this is because the faster stream releases its periodic service update messages more frequently, so more of its service update messages are likely to be lost or dropped in the wireless medium under the same (m,k) constraint.

Figure 22 presents the communications overhead, that is, the total number of control packets sent/received at all nodes in each service stream. Figure 22 shows that the communications overhead increases with an increasing m/k ratio: the SPIRIT daemon running on the mobile devices injects more control packets into the network as the number of data losses grows and the clients issue more retransmission requests through the data-persistent SPIRIT API. The faster (100 ms) service stream also incurs more communications overhead than the slower (1 s) stream because of its higher service execution frequency and consequently shorter subscription intervals.

7. Conclusion and Future Work

Providing QoS in MANETs is a highly challenging problem. This paper presents a set of mechanisms for supporting soft QoS within a proximity-aware mobile service sharing infrastructure called SPIRIT. The importance of proximity-aware applications being able to specify and receive simple QoS guarantees, such as reliable and persistent data delivery, has been recognized and addressed in this paper. A real application scenario, in which off-the-shelf smartphones are used by paramedics to monitor the breathing activities of multiple patients at once at a disaster site, is also discussed; the breathing activity data constitutes the service provided by the infrastructure. SPIRIT runs as a background daemon on the patient and paramedic devices (which serve as the service providers and clients, resp.) and facilitates the processes of finding other devices, sharing sensor data, and specifying QoS requirements. The set of mobility-aware mechanisms presented in this paper (e.g., QoS signaling, QoS classification, admission control, and resource policing) enables the implementation of the QoS specifications provided by the applications; to that end, these mechanisms help to continuously reestablish and maintain the required QoS along multiple short-lived, multihop paths between service providers and clients, while handling mobility and node resources in an efficient way. Simulation results indicate that considering the local QoS requirement can improve the network performance in terms of average delay, delay variation, throughput, and packet reception.

The proposed SPIRIT infrastructure is limited in that it only provides simple QoS guarantees such as reliable and persistent data delivery, whereas additional critical QoS requirements, such as timeliness and security, remain to be included. Future work will include provisioning of latency requirements and priorities for the most crucial service requests. In addition, approaches for specifying QoS requirements as ranges of values [41-44] will also be investigated: treating resource reservations as ranges will enable the flexibility required for operation in a dynamic environment. Future work will also look at augmenting the admission control procedure with adaptive approaches via support for dynamic feedback.

Competing Interests

The authors declare that they have no competing interests.