Abstract

Multimedia content adaptation has proved to be an effective mechanism for mitigating the heterogeneity and constraints of devices and networks in pervasive computing environments. Moreover, it makes it possible to deliver data while taking into consideration the user's preferences and the context of his/her environment. In this paper, we present an algorithm for service composition and protocols for executing the service composition plan. Both the algorithm and the protocols are implemented in our distributed content adaptation framework (DCAF), which provides a service-based content adaptation architecture. Finally, a performance evaluation of the algorithm and the protocols is presented.

1. Introduction

There is a huge amount of multimedia information being captured and produced for different multimedia applications, and the speed of generation is constantly increasing. While providing the right information to a user is already difficult for structured information, it is much harder for large volumes of multimedia information. This is further complicated by the emergence of new computing environments. For example, a user may want to access content on the network through a handheld device connected by a wireless link or from a high-end desktop machine connected by a broadband network. The content that can be presented on one device might not necessarily be viewable on another device unless some content transformation operations are applied [1].

An efficient content delivery system must be able to adapt the delivered content for every client in every situation in order to address the wide range of clients, minimal bandwidth requirements, and fast real-time delivery [2]. As a consequence, content adaptation has attracted considerable attention in multimedia research. Here, we focus on issues related to content adaptation in pervasive computing systems.

Several adaptation approaches have been developed to perform content adaptation for pervasive computing. These approaches are generally classified into server-based [3, 4], client-based [5, 6], and proxy-based [7–9] approaches. Since server-based adaptation degrades the performance of the server, and client-based adaptation is very difficult and sometimes impossible due to the limited processing power of pervasive devices (e.g., smartphones, PDAs), most existing adaptation systems implement a proxy-based approach. Furthermore, to alleviate the overload problem of content adaptation processing, distributed approaches have been proposed, such as Ninja [10], MARCH [11], DANAE [12], and DCAF [13].

Jannach and Leopold [14] proposed a server-side multimedia content adaptation framework which performs the content adaptation by composing adaptation services resident on the server itself. This approach puts computational load and resource consumption on the server, which decreases the performance of content delivery. In our architecture (DCAF), by contrast, content adaptation is performed externally using Internet-accessible adaptation services, which enhances the performance of the system in terms of content delivery time, resource consumption, and network overhead.

While the DCAF architecture [13] provides a distributed adaptation mechanism, its centralized control of adaptation service execution impacts the overall performance of the system in delivering the adapted data to the user. In order to enhance the performance of the system, a decentralized service execution protocol is incorporated into the DCAF architecture. In this paper, we present the centralized and decentralized service execution protocols, and the results of the experiments that were conducted to compare their performance.

The rest of the paper is organized as follows. In Section 2, we discuss related work. Section 3 presents an overview of the distributed content adaptation framework (DCAF). The multimedia adaptation graph generator (MAGG) algorithm is presented in Section 4. In Section 5, we present the composite service execution protocols. The results of the experiments conducted to evaluate the performance of the MAGG algorithm and the composite service execution protocols are presented in Section 6. In Section 7, we discuss fault tolerance issues in the DCAF architecture. Finally, in Section 8, we conclude the paper and highlight some future work.

2. Related Work

As we have mentioned in Section 1, the existing adaptation frameworks are categorized into three groups: the server-based, proxy-based, and client-based approaches. In the following, we present each of these approaches.

In the server-based approach (e.g., [3, 4, 15]), the functionality of the traditional server is extended by adding content adaptation. In this approach, both static (offline) and dynamic (on-the-fly) content adaptation can be applied, and better adaptation results can be achieved since the adaptation is performed close to the content; however, clients experience performance degradation due to the additional computational load and resource consumption on the server [16].

In the proxy-based approach (e.g., [8, 16, 17]), a proxy located between the client and the server acts as a transcoder for clients with similar network or device constraints. The proxy makes a request to the server on behalf of the client, intercepts the reply from the server, decides on and performs the adaptation, and then sends the transformed content back to the client. In this approach, there is no need to change the existing clients and servers. The problem with proxy-based adaptation approaches is that most of them focus on a particular type of adaptation, such as image transcoding or HTML to WML conversion, and are application specific. In addition, if all adaptations are done at the proxy, the proxy becomes computationally overloaded, as some adaptations are computationally intensive, and this degrades the performance of information delivery just as in the server-based approach.

The client-based approach (e.g., [5, 6, 18]) can be implemented in two ways: transformation by the client device, or selection of the best representation after receiving the response from the origin server. This approach provides a distributed solution for managing heterogeneity, since every client can locally decide on and employ the adaptations most appropriate to it. However, adaptations that can benefit a group of clients with similar requests can be implemented more efficiently with server-based or proxy-based approaches. Furthermore, not all clients are able to implement content adaptation techniques, due to processor and memory constraints and limited network bandwidth [19].

The above three approaches do not deal with the problem of content adaptation from a service perspective, in which adaptation can be commercialized and utilized by users, content providers, or other service providers (such as Internet service providers). Introducing content adaptation as a service (the service-based adaptation approach) distributes the adaptation activities and results in performance enhancement. It also opens new revenue opportunities to service providers.

A service-based content adaptation approach is quite recent, and there are very few works on distributed content adaptation mechanisms. Nevertheless, we can cite Ninja [10], MARCH [11], DANAE [12], and DCAF [13]. In our previous work [13], we proposed a service-based architecture (DCAF) in which the adaptation tools are developed externally by third parties or service providers. While the architecture provides a distributed adaptation mechanism, the centralized control of the execution of these services impacts the overall performance of the system in delivering the adapted data to the user. In order to enhance the performance of the system, a decentralized service execution protocol is introduced. In this paper, we present this protocol and the result of its comparison with the centralized service execution protocol.

3. Overview of the DCAF Architecture

3.1. Components of the DCAF Architecture

As displayed in Figure 1, the DCAF architecture is composed of six components. The description of these components is summarized as follows.

Content servers (CSs): they are standard data repositories such as web sites, databases, and media servers.

Content proxies (CPs): they provide access to content servers, formulate user requests in the source format, and manage and provide content descriptions (metadata).

Context profile repository (CPR): it stores the users’ preferences and the device characteristics. Users can update and modify their profiles at any time. Dynamic data such as the user location and the network conditions are determined at request execution time.

Adaptation service registry (ASR): similar to a Universal Description, Discovery and Integration (UDDI) registry, it stores multimedia adaptation service descriptions, including semantic information (e.g., the type of data the service handles), and allows for service lookup.

Adaptation services proxies (ASPs): they host adaptation tools. In our framework, ASPs are implemented as Web Services.

Local proxies (LPs): they are access points to information delivery systems. They are in charge of retrieving and processing the context profile (user, device, and network), deciding the type and number of adaptation processes, discovering appropriate adaptation services, and planning the execution of the services.

3.2. Context, Content, and Service Descriptions

The decision of the adaptation process depends on the quality of information gathered from various sources. This information consists of the context description, the content description, and the adaptation service descriptions.

(1) Context Description (Context Metadata)
It includes the device profile, the user profile, the network profile, and other dynamic context data like location, sensor information, and so forth. We have used the CSCP [20] standard to represent the context information. Figure 2 shows an example of partial context profile for a sample device.

(2) Content Description (Content Metadata)
The metadata contains both the information about the object like object’s title, description, language, and so forth, and feature data about the media itself including the media type, the file format, the file size, the media dimensions, the color information, the location, and so forth. For content description, the XML form of MPEG-7 description is used. Figure 3 shows an example of the description for an image data.

(3) Adaptation Service Description
It contains information about the service such as its name, identifier, function, processing rate (the amount of data processed per second, expressed in Kbytes per second), cost, and so forth. In order to describe adaptation services, we developed a multimedia adaptation service ontology [21]. This ontology facilitates describing adaptation services semantically so that they can be discovered, selected, composed, and invoked automatically. Figure 4 presents an example of a description of an audio-to-text adaptation service using this ontology.

4. Multimedia Adaptation Graph Generator (MAGG)

In a multimedia content adaptation framework like DCAF, the challenge is that no single complete software solution can satisfy all adaptation needs. To solve this problem, adaptation services are composed to realize the required adaptation. Service composition is defined as the process of putting together atomic/basic services to perform complex tasks. For example, to transform English text into audio in French, we need the composition of a language translation service and a text-to-audio conversion service. Since an adaptation process can be carried out in a number of adaptation steps (adaptation tasks), and several adaptation services may be able to execute each adaptation task, there are many service composition possibilities, which makes service composition difficult. To address this problem, we have developed an algorithm called the multimedia adaptation graph generator (MAGG) that composes distributed multimedia adaptation services.

4.1. Service Composition Modelling: Definitions and Notations

Definition 1 (Media Object). A media object m is a multimedia data item which can be a text, an image, an audio, or a video, represented as m = (f1, f2, ..., fn), where f1, ..., fn are media features or metadata.

Definition 2 (State). The state of a media object m, denoted as S(m), is described by the metadata values. For example, for an image object the state holds the values for the format, the color, the height, the width, and so forth.
For example, S(image) = (bmp, 24 bits, 245 pixels, 300 pixels).

Definition 3 (Adaptation Task). An adaptation task t is an expression of the form T(p1, ..., pn), where T is a transformation and p1, ..., pn are parameters.
For example, ImageFormatConversion (imageIn, imageOut, oldFormat, newFormat), where
(i) imageIn: image input file,
(ii) imageOut: image output file,
(iii) oldFormat: old file format,
(iv) newFormat: new file format.

Definition 4 (Adaptation Service). An adaptation service is a service described in terms of inputs, outputs, preconditions, and effects. An adaptation service is represented as (R, I, O, Pre, Eff, Q), where
(i) R: an atomic process that realizes an adaptation task,
(ii) I: input parameters of the process,
(iii) O: output parameters of the process,
(iv) Pre: preconditions of the process,
(v) Eff: effects of the process,
(vi) Q: quality criteria of the service.

Definition 5 (Operator (Plan Operator)). A plan operator o is an expression of the form o = (h, Pre, Eff, Q), where
(i) h: an adaptation task realized by an adaptation service with input parameters I and output parameters O. h is called the head of the operator,
(ii) Pre: represents the operator’s preconditions,
(iii) Eff: represents the effects of executing the operator,
(iv) Q: represents quality attributes (e.g., cost, processing rate, etc.).
Let S be a state, t be an adaptation task, and m be a media object. Suppose that there is an operator o with head h that realizes t such that Pre of o is satisfied in S. Then, we say that o is applicable to t, and the new state S' is given by S' = (S − Pre(o)) ∪ Eff(o).
Example: for the above given adaptation task, we can have an adaptation operator instance as follows.
Operator: ImageFormatConversionOperator (http://media-adaptation/imagefiles/image1, http://media-adaptation/imagefiles/image2, mpeg, bmp).
(i) Input: http://media-adaptation/imagefiles/image1.
(ii) Output: http://media-adaptation/imagefiles/image2.
(iii) Precondition: hasFormat (http://media-adaptation/imagefiles/image1, mpeg).
(iv) Effect: hasFormat (http://media-adaptation/imagefiles/image2, bmp).
(v) Quality: (cost = 30 units, processing rate = 1500 kbyte/s).
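The operator model of Definitions 4 and 5 can be sketched in a few lines of Python. This is an illustrative sketch only, not DCAF's actual API: the names Operator, applicable, and apply_op are our inventions, states are modeled as sets of facts, and the state-update rule (remove the consumed preconditions, add the effects) is an assumption consistent with the applicability condition above.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    head: str      # adaptation task realized by the operator
    pre: set       # preconditions: facts required to hold in the state
    eff: set       # effects: facts produced by executing the operator
    quality: dict  # quality attributes, e.g., cost and processing rate

def applicable(op: Operator, state: set) -> bool:
    """An operator is applicable when all its preconditions hold in the state."""
    return op.pre <= state

def apply_op(op: Operator, state: set) -> set:
    """Assumed update rule: replace the consumed facts with the effects."""
    return (state - op.pre) | op.eff

# The ImageFormatConversionOperator instance from the example above,
# with file URLs shortened to "image1"/"image2" for readability:
convert = Operator(
    head="ImageFormatConversion",
    pre={("hasFormat", "image1", "mpeg")},
    eff={("hasFormat", "image2", "bmp")},
    quality={"cost": 30, "processing_rate_kbytes_s": 1500},
)

state = {("hasFormat", "image1", "mpeg")}
assert applicable(convert, state)
new_state = apply_op(convert, state)  # holds the effect fact only
```

Representing preconditions and effects as sets makes both the applicability test of Definition 5 and the edge condition of Remark 1 a plain subset check.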

Definition 6 (Adaptation Graph). An adaptation graph G = (N, E) is a directed acyclic graph (DAG), where
(i) N is the set of nodes that represent the adaptation operators,
(ii) E is the set of edges that represent the possible connections between the adaptation operators.
The start node ns is a pseudo operator with effect Si (the initial state) but no precondition.
The end node ne is a pseudo operator with precondition Sg (the goal state) but no effect.

Remark 1. Let o1 ∈ N and o2 ∈ N; a link or an edge exists from o1 to o2 if the following condition is satisfied: Pre(o2) ⊆ Eff(o1), where
(i) Pre(o2) denotes the preconditions of o2,
(ii) Eff(o1) denotes the effects of o1.

Definition 7 (Adaptation Planning Problem). An adaptation planning problem is a four-tuple (Si, Sg, T, O), where Si is the initial state of the media object, Sg is the goal state of the media object, T is an adaptation task list, and O is the set of adaptation operators. The result is a graph G.

Definition 8 (Adaptation Path). An adaptation path is a path in the adaptation graph that connects the start node to the end node. It is represented as a list of the form (ns, o1, ..., ok, ne), where ns and ne are the start and the end nodes and each oi is an adaptation operator instance.
The MAGG algorithm consists of different procedures and functions. In Algorithm 1, we present only the structure of the algorithm; see [21] for a complete listing. This algorithm is used to construct a multimedia adaptation graph which gives all service composition possibilities that satisfy the required adaptation needs.

Algorithm: graph(Si, Sg, T, O)
Input: initial state Si, final state Sg, adaptation task list T, and adaptation operators O
Output: an adaptation graph G = (N, E)
// Global constant
// Limit: maximum number of neutral operators allowed in a connection
// Global variables
// N: a set of nodes in the graph
// E: a set of edges in the graph
// ns: the start node
// ne: the end node
// NO: a set of neutral operators available in the system
// Local variables
// T: a set of adaptation tasks
// t: an adaptation task, element of T
// Nt: a set of nodes for the adaptation operators realizing an adaptation task
// P: a set of parent nodes
// F: a set containing the end node
Var ns, ne, NO, Nt, P, F
Begin
ns = ConstructStartNode(Si) // constructs the start node from the initial state
ne = ConstructEndNode(Sg) // constructs the end node from the goal state
NO = ConstructNeutralOperators() // returns the list of the neutral operators available in the system
N = {ns, ne} // initialization
E = {} // initialization
P = {ns} // initialization
for each t in T
Begin
Construct nodes Nt from O with operators realizing t
// several operators can realize a task
Connect(P, Nt)
// after the process P holds the value of Nt
End // t is processed
F = {ne}
Connect(P, F) // connects the end node
Return G = (N, E)
End // graph
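The layered structure of Algorithm 1 can be sketched in Python. This is a simplified, hypothetical version: neutral operators and the precondition/effect check of Remark 1 are omitted, tasks are assumed to be totally ordered, and every name (build_graph, operators_for, the service labels) is illustrative only.

```python
def build_graph(tasks, operators_for):
    """Build a layered DAG: one layer of operator nodes per adaptation task.

    tasks: ordered list of adaptation task names.
    operators_for: maps a task name to the operator nodes realizing it.
    Returns (nodes, edges), with "start" and "end" as pseudo operators.
    """
    nodes = {"start", "end"}
    edges = set()
    parents = {"start"}                      # the start node is the first parent
    for task in tasks:
        layer = set(operators_for[task])     # several operators can realize a task
        nodes |= layer
        # connect every current parent to every operator of the new layer
        edges |= {(p, n) for p in parents for n in layer}
        parents = layer                      # the new layer becomes the parent set
    edges |= {(p, "end") for p in parents}   # finally, connect the end node
    return nodes, edges

# Two tasks, each realizable by two (made-up) services:
tasks = ["translate", "text2audio"]
ops = {"translate": ["tr1", "tr2"], "text2audio": ["tts1", "tts2"]}
nodes, edges = build_graph(tasks, ops)
# 6 nodes (2 pseudo + 4 operators) and 8 edges, i.e., 4 candidate paths
```

Connecting consecutive layers all-to-all is what yields every service composition possibility; in the full algorithm, Remark 1 prunes edges whose precondition/effect condition fails.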

Since an adaptation task can be achieved by more than one service (the services are represented by operators in the graph) and each service has a different QoS, choosing an appropriate service is an obvious requirement. Once the adaptation graph consisting of all possible compositions is generated, the optimal adaptation path (also called the service composition plan) is chosen from the graph based on user-specified QoS criteria [13]. The QoS is a multidimensional property which may include service response, service charge, quality of received data, and so forth. Here, the QoS represents only the service charge (cost) and the waiting time. For a service s executing an adaptation task t, the QoS is defined as follows:

QoS(s) = (Cost(s), Time(s)),

where
(i) Cost(s): the adaptation service execution cost,
(ii) Time(s): the adaptation service execution time plus the data transmission time.

Let p = (s1, s2, ..., sn) be a path in an adaptation graph, where n is the number of services in the adaptation path. We define the QoS of the path p, denoted as QoS(p), as follows:

QoS(p) = (Cost(p), Time(p)),

where

Cost(p) = Cost(s1) + ... + Cost(sn), Time(p) = Time(s1) + ... + Time(sn).

To aggregate the quality values, we define the scaled qualities ScCost(p) and ScTime(p) as

ScCost(p) = (Costmax − Cost(p)) / (Costmax − Costmin),
ScTime(p) = (Timemax − Time(p)) / (Timemax − Timemin),

where Costmax and Costmin are the values of the maximum and the minimum costs, respectively, and Timemax and Timemin are the values of the maximum and the minimum times, respectively.

Users can give their preferences on QoS by specifying weight values for each criterion. The score of a path p with weighted values is calculated as

Score(p) = Wc · ScCost(p) + Wt · ScTime(p),

where Wc and Wt represent the weight values assigned to the cost and the time, respectively, with Wc, Wt ∈ [0, 1] and Wc + Wt = 1.

Let P be the set of all possible paths in an adaptation graph; then the optimal path p* is the path with the maximum score value, where p* is defined as follows: Score(p*) = max{Score(p) : p ∈ P}.

Dijkstra’s algorithm [22] was used to find the optimal path, that is, the path in the graph with the maximum score value . More information about the optimal path selection using Dijkstra’s algorithm is found in [21].
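The scoring model can be sketched in Python. For clarity, this hypothetical sketch enumerates all candidate paths rather than running Dijkstra's algorithm, and the service QoS values (cost, time pairs) are made up for illustration; the function and service names are not DCAF's.

```python
from itertools import product

def score_paths(paths, qos, w_cost, w_time):
    """Score each path: scale cost and time over the candidates, then weight.

    paths: list of service-name tuples; qos: service -> (cost, time).
    Returns a dict mapping each path to its Score(p).
    """
    cost = {p: sum(qos[s][0] for s in p) for p in paths}
    time = {p: sum(qos[s][1] for s in p) for p in paths}
    c_max, c_min = max(cost.values()), min(cost.values())
    t_max, t_min = max(time.values()), min(time.values())

    def scale(v, vmax, vmin):
        # (max - value) / (max - min): smaller cost/time gives a higher score
        return 1.0 if vmax == vmin else (vmax - v) / (vmax - vmin)

    return {p: w_cost * scale(cost[p], c_max, c_min)
             + w_time * scale(time[p], t_max, t_min) for p in paths}

# Two layers of two candidate services each, with invented (cost, time) QoS:
qos = {"tr1": (30, 2.0), "tr2": (10, 5.0), "tts1": (20, 1.0), "tts2": (40, 0.5)}
paths = [tuple(p) for p in product(["tr1", "tr2"], ["tts1", "tts2"])]
scores = score_paths(paths, qos, w_cost=0.5, w_time=0.5)
best = max(scores, key=scores.get)  # the optimal service composition plan
```

With equal weights, the cheapest path and the fastest path can lose to a balanced one, which is exactly what the weighted aggregation is for.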

5. Composite Service Execution Protocols

The execution of the composite service (also called the service composition plan) can be done in a centralized or a decentralized fashion. In the centralized approach (also called the star-based approach), the exchange of data between the services is done through a broker acting as an intermediary [23, 24]. In the decentralized approach (also called the mesh-based approach), however, the exchange of data is done directly from one service to another without the need for an intermediary [25]. In the following, we incorporate the two approaches into our DCAF architecture.

5.1. Centralized (Star-Based)

In the DCAF architecture, the local proxy acts as a broker. As presented in Figure 5, the local proxy sends the data to each adaptation service proxy (ASP) in the service composition plan and gets the result from the ASPs.

The data that the local proxy receives from the ASPs is only partially adapted until the last ASP in the service composition plan has been invoked. The number of communications between the local proxy and the ASPs grows with the number of ASPs involved in the content adaptation process. Hence, the forward and backward communications between the local proxy and the ASPs incur additional overhead on the overall performance of delivering data to the user.
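The star-based round-trip pattern can be sketched as follows. This is an illustrative sketch, not DCAF's implementation: the adaptation services are stand-in Python callables, and the message counter simply records one send and one reply per service invocation.

```python
def run_centralized(plan, data):
    """Broker loop: the local proxy sends data to each ASP and gets it back.

    plan: ordered list of adaptation services (callables).
    Returns (adapted_data, number_of_messages).
    """
    messages = 0
    for service in plan:
        messages += 1        # local proxy -> ASP: send the (partially adapted) data
        data = service(data)
        messages += 1        # ASP -> local proxy: return the result
    return data, messages

# Two toy "adaptation services" standing in for real transcoders:
plan = [str.lower, str.strip]
result, msgs = run_centralized(plan, "  HELLO  ")
# result == "hello"; msgs == 4, i.e., one round trip per service
```

The 2n-message cost of this loop (for n services) is the overhead the decentralized protocol of the next subsection removes.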

5.2. Decentralized (Mesh-Based)

As shown in Figure 6, in the decentralized service execution protocol, the ASPs communicate with each other and with the local proxy by exchanging a record message (RM) [25]. The RM contains (1) the addresses of the services that are on the optimal path of the graph, (2) the address of the local proxy, and (3) the data to be adapted.

The exchange of record messages between the ASPs is done using service execution and data forwarding (SEDF) modules. The SEDF module is in charge of executing local services to perform content adaptation and of forwarding the record message containing the partially adapted data for subsequent adaptation. When an ASP receives an RM, its SEDF module does the following.

(1) If the first service S in the RM is found locally, the SEDF executes S, taking the data in the RM as the input of S. Then, it removes S from the RM and puts the output of S in the data field of the RM.

(2) It repeats step (1) until the first service in the RM is not found locally or the last service in the RM has been executed.

(3) The modified record message is forwarded to the ASP that owns the first service in the RM. If the RM does not contain any service, which means that the adaptation process is finished, the adapted data are sent to the local proxy.
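The SEDF forwarding loop described above can be sketched in Python. Everything here is illustrative, not DCAF's actual API: RecordMessage, sedf_step, and owner_of are our names, the "services" are plain string functions, and forwarding is simulated by recording hops instead of sending network messages.

```python
from dataclasses import dataclass

@dataclass
class RecordMessage:
    services: list  # remaining services on the optimal path
    proxy: str      # address of the local proxy
    data: str       # the (partially) adapted data

def owner_of(service, local_services):
    """Find which ASP hosts a given service."""
    return next(a for a, svcs in local_services.items() if service in svcs)

def sedf_step(asp, rm, local_services, hops):
    """Execute every leading service that `asp` owns, then forward the RM."""
    while rm.services and rm.services[0] in local_services[asp]:
        name = rm.services.pop(0)                 # execute locally, consume entry
        rm.data = local_services[asp][name](rm.data)
    if rm.services:                               # forward to the next owner
        hops.append((asp, owner_of(rm.services[0], local_services)))
    else:                                         # adaptation finished
        hops.append((asp, rm.proxy))

# Two ASPs, each hosting one toy service:
local_services = {"asp1": {"lower": str.lower}, "asp2": {"strip": str.strip}}
rm = RecordMessage(["lower", "strip"], proxy="lp", data="  HELLO  ")
hops = []
sedf_step("asp1", rm, local_services, hops)  # runs "lower", forwards to asp2
sedf_step("asp2", rm, local_services, hops)  # runs "strip", returns to the proxy
```

Because the partially adapted data travels inside the RM from ASP to ASP, the local proxy is contacted only once, at the end.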

We have compared the centralized and decentralized service execution protocols using two metrics: data transmission time and size of exchanged data. Both the data transmission time and the size of exchanged data are smaller for the decentralized protocol, since it requires fewer communications. For a scenario with three ASPs, we need six communications for the centralized protocol and four for the decentralized one. Therefore, the decentralized protocol gives better performance than the centralized one, especially when the bandwidth is small and the number of ASPs is large.
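The communication counts quoted above follow directly from the two topologies: the broker needs a round trip per ASP, while the mesh needs one hop per ASP plus the final delivery to the local proxy. A two-line sketch (with made-up function names) makes the comparison explicit:

```python
def centralized_msgs(n_asps: int) -> int:
    return 2 * n_asps   # one send plus one reply per ASP, via the local proxy

def decentralized_msgs(n_asps: int) -> int:
    return n_asps + 1   # LP -> ASP1 -> ... -> ASPn -> LP, one hop per link
```

For three ASPs this gives six versus four communications, matching the scenario in the text, and the gap 2n − (n + 1) = n − 1 grows with every additional ASP.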

6. Experimentations

Two major experiments were conducted. The first experiment studies the behaviour of the graph generation algorithm with respect to the depth of the graph and the number of services per transformation. The second experiment measures the performance of the centralized and decentralized execution protocols. The experiments were performed on a 1.9 GHz Pentium 4 with 256 MB RAM running Microsoft Windows 2000. In the following, we present the results of these experiments.

6.1. Graph Construction

As depicted in Figure 7, the relationship between the graph construction time and the number of adaptation tasks (the adaptation transformations) is linear. Moreover, the construction time grows slowly with the number of services per transformation. The graph construction time does not include the service execution time. It was also observed that the progression, both for the depth (number of transformations) and the width (number of services per transformation), was almost constant, with an average increase of 40 milliseconds per additional depth level and 10 milliseconds per 10 additional services. This implies that having several services per transformation does not much affect the total construction time, while it provides the possibility of selecting the best service among the candidates.

The performance of the graph construction algorithm is reasonable even for the maximum depth of the graph (graph depth equal to 10). For example, for a graph with depth 10 and width 30, the construction time is only 335 milliseconds, which is quite acceptable. Moreover, most adaptation scenarios need a graph depth of only 5 or 6, which is enough to realize any possible type of adaptation.

6.2. Service Composition Plan Execution

A simulation has been made to compare the performance of the composite service execution protocols in terms of data delivery time and exchanged data size. The data delivery time is calculated as the sum of the service execution time and the data transmission time. The data delivery time and the exchanged data size are measured with respect to the file size, the bandwidth (which is considered uniform), and the number of services involved in the content adaptation process. Figure 8 presents the relationship between the data delivery time and the number of services for the two protocols. The relationship is perceived to be linear for both protocols. As the number of services increases, the performance gap between the protocols widens. This means that the decentralized protocol performs increasingly better as the number of services in the service composition plan grows.

The data delivery time versus file size presented in Figures 9 and 10 behaves in the same way as in Figure 8. The advantage of the decentralized service execution protocol becomes more visible when the data delivery time is analyzed with respect to the bandwidth. As illustrated in Figure 11, the performance gap between the two protocols is very significant when the bandwidth changes from 5.5 Mbits per second to 54 Mbits per second.

In addition, the analysis of the exchanged data size versus the number of services and the file size (see Figures 12 and 13) reflects similar behaviour to the data delivery time for the two protocols. Hence, the decentralized protocol performs better than the centralized one with respect to the data delivery time and the size of exchanged data.

To summarize, the decentralized execution protocol can enhance the performance of the DCAF architecture by decreasing the network load and the data delivery time, making the architecture more scalable.

7. Fault Tolerance in DCAF Architecture

The content adaptation process is accomplished successfully only if there is no fault during service execution. However, the execution of the composite service can fail for different causes, such as network disruption and service discovery failure.

To tackle this problem, the local proxy can replace the failed service with an equivalent one. Identifying the failed service is straightforward for the centralized protocol compared to the decentralized one, since the local proxy controls the execution of each service in the centralized protocol. In the decentralized protocol, however, as discussed in [26], the failed service can be identified through the use of acknowledgment messages (ACKs). When an ASP executes an adaptation service S and forwards the partially adapted data to the next ASP, it sends an ACK message to the local proxy to report that the service S has been executed. If the local proxy does not receive an ACK message from an ASP, it concludes that the composite service execution has failed. The implementation of the fault detection and recovery mechanism within the DCAF architecture is in progress.
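The ACK-based detection idea can be sketched as follows. This is a hypothetical simplification of the mechanism discussed above: instead of timers and network messages, the local proxy is reduced to a function that scans the plan for the first service whose ACK never arrived; the names are illustrative only.

```python
def first_failed_service(plan, acks_received):
    """Return the first service on the plan with no ACK, or None if all ACKed.

    plan: ordered list of service names on the composition plan.
    acks_received: set of service names the local proxy got an ACK for.
    """
    for service in plan:
        if service not in acks_received:
            return service  # execution is considered failed at this service
    return None             # every ASP acknowledged: composition succeeded

plan = ["translate", "text2audio", "compress"]
# All ACKs arrived: the composite service execution succeeded.
assert first_failed_service(plan, {"translate", "text2audio", "compress"}) is None
# Only the first ACK arrived: the failure is localized at the second service.
assert first_failed_service(plan, {"translate"}) == "text2audio"
```

Because the plan is ordered, a single missing ACK is enough to localize the failure, which is what lets the local proxy substitute an equivalent service and resume.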

8. Conclusion and Future Work

In this paper, we have presented a multimedia adaptation graph generator (MAGG) algorithm and service composition plan execution protocols, that is, centralized and decentralized protocols. The algorithm constructs an adaptation graph which gives all service composition possibilities that satisfy the required adaptation needs. The selection of the optimal path (service composition plan) from the graph is done based on the QoS model which consists of the adaptation service execution cost, the adaptation service execution time, and the data transmission time.

The MAGG algorithm and the service execution protocols have been experimented with in the DCAF architecture. The experiments on the graph construction algorithm (the service composition algorithm) show that the graph construction time increases linearly with the number of adaptation tasks. The number of services per task, however, has no significant impact on the graph construction time. In addition, the experiments show that the decentralized execution protocol performs better than the centralized one, especially when the bandwidth is small, which is not surprising.

Successful content delivery does not depend only on the effective execution of adaptation services but also on the detection and recovery of failed adaptation services during service execution. For this purpose, we are planning to implement a fault detection and recovery mechanism within the DCAF architecture in order to develop a fault-tolerant content adaptation system.

Acknowledgment

The authors would like to thank the reviewers for their valuable comments that helped them improve the paper.