1. Introduction

This special issue is devoted to a well-focused subject: the personalization of mobile multimedia broadcasting. Nevertheless, the topics of the papers published here demonstrate an amazing diversity. This suggests that our subject is both highly relevant and experiencing a period of rapid change. Until recently, broadcasting was a well-established, relatively stable technology. However, new usage scenarios, mobile consumers with mobile devices, and the desire for personalized content are creating new challenges. We currently have many more questions than answers.

We are confronted with a range of subtly different techniques, such as digital TV, IPTV, video-on-demand, Web-TV, live casts, mobile TV, peer-to-peer TV, and video portals, which use different encoding/decoding standards, transmission protocols, streaming methods, quality-of-service levels, and interactivity features. In addition, they often require different bandwidths and different infrastructures.

In view of this diversity, it is sensible to take a fresh look at the basic concepts. The rest of this special issue is dedicated to presenting a nice selection of timely, ongoing research. Therefore, this editorial introduction starts (Sections 2–4) with a contextual overview authored by László Böszörményi. This overview concentrates on “the past and the future of this topic”—leaving the present, together with all of its unsolved questions, to be the subject of the rest of the papers.

An overview of the different contributions in this special issue closes this editorial introduction in Section 5.

At first glance, broadcasting and personalization seem to contradict one another. The idea of broadcasting is to transmit a message from an authority to everybody; the idea of personalization is to exchange messages between individuals. Broadcasting offers a high degree of sharing and a low level of privacy. Personalization, on the other hand, usually offers the opposite: privacy increases, but sharing decreases. There are a number of basic issues requiring very different, often contradictory treatments and strategies. Perhaps the most important of these issues are (1) authenticity and popularity, (2) personalization and privacy, (3) sharing, (4) interactivity, and (5) rights management.

2. A View of the Past

2.1. A Bit of Ancient (Mainly European) History

The idea of broadcasting might have its roots—like almost everything—in the attitude of the ancient Greeks, who interpreted the thunderbolt as an expression of the anger of Zeus. This message was authentic—coming directly from the main god—and everybody could perceive it—actually, had to perceive it. There was no way not to listen to Zeus’s “word,” and thus it did not leave much room for privacy. There was, however, room for many different interpretations.

Zeus used the air as a common, shared medium, making this kind of communication very efficient and reliable. If Zeus repeatedly created thunder, this only emphasized his anger. Thus he used a combination of acoustic and visual signals, ensuring that it was impossible not to listen to the acoustic expression of his anger, even with the eyes closed. The combination of these two modalities still plays an important role today.

The first sign of personalization lies in the diversity of gods, which allowed at least a choice among them. In the great war between the Greeks and the Trojans, the Greeks, especially Odysseus, followed the goddess Pallas Athena, whereas the Trojans were advised by Aphrodite. Communication channels were shared, but with a limited radius. Privacy was higher and was realized through the relative freedom of selecting which god to follow.

In the ancient Jewish religion, God often speaks personally and in secret to selected persons. He can be heard but cannot be seen. He speaks a more sophisticated and understandable language than that of thunderbolts. Even interaction is often possible. A choice among gods is, however, not allowed, as it is a monotheistic religion. Greek mythology also permitted some people to interact personally with gods. Interestingly, in these cases the corresponding god took the figure of a human. For example, Odysseus meets Pallas Athena in the form of a young swineherd, and his son Telemachus meets the same goddess in the shape of King Menelaus. Incidentally, they both recognize the presence of Pallas Athena by the phenomenon that their partner appeared in supernatural beauty. This suggests that a personal conversation with a god was seen as something beautiful, whereas an impersonal message, such as a thunderbolt, was frightening. Regardless of this, the message was still very authentic, although personal. Although the sharing of communication channels disappeared for the sake of interactivity, a certain level of sharing was still available, as some gods, such as Pallas Athena, had the admirable capability of appearing at two different physical places at the same time—we would say a kind of virtual replication.

Greek gods were omnipresent, and therefore mobility was not a problem. Greek people were extremely mobile and could listen to their favorite god or goddess everywhere since, unlike some later peoples, they did not need a special church for this. Communication also seemed to work without difficulty among people. The Greeks (consisting of many small groups of people) and the Trojans were at war, but they never experienced difficulty in speaking with or understanding one another. Unfortunately, we do not know in which language they communicated.

2.2. Some Medieval History

The second major step in the history of broadcasting was presumably the invention of printing by Gutenberg in the fifteenth century. Previously, visual material on paper (or clay, etc.) had to be physically replicated and transported in order to be broadcast. This was extremely expensive. Copying a book manually could take a small group of monks a year or more, and bringing it to a different monastery often took several weeks. The printed book, and especially the invention of the newspaper, was a revolution in the technology of broadcasting. Compared to the ancient Greeks, we can observe a number of changes. Authenticity starts to decline. Although it was originally only the Holy Bible that was replicated, fairly soon a large number of publishers with different levels of authenticity appeared on the scene. Authenticity was step by step replaced by popularity. A thunderbolt had to be perceived regardless of whether people liked it or not. A book or a newspaper must be bought; it must be liked or “popular.” Whether it is still “true”—authentic—is another question. This obviously led to a certain degree of contradiction and competition. At the same time, the level of personalization starts to grow. People have a rich selection of choices. They also have the opportunity to become publishers, a process that is definitely easier than it was for Greeks to become a god (or at least a half-god). At the same time, the issue of rights management emerges: authors of books and newspapers want to have some control over their publications. Previously, authors often remained anonymous—their only reward was in the eternity of God. This changed radically in the new age.

2.3. Radio, TV, Telephony

The next revolution in broadcasting was the appearance of analog radio at the beginning of the 20th century and that of television somewhat later. These “classical” broadcast media resemble in many ways the communication paradigm of the Greek gods. They transmit a central, authentic message, essentially continuously; the only “escape” for listeners is to change the channel or to switch off the receiver. Beyond channel selection, no interactivity is provided. Each channel has a fixed bandwidth, a fragment of the spectrum, and is shared by all of its listeners, making broadcasting highly efficient. The senders themselves share the “air” as a common medium, but try to avoid any kind of further sharing. Even though they often transmit more or less the “same” information, they try to do this in a different form—as competition and profit are the basic principles keeping them alive.

The first steps in switching from analog to digital technology tried to maintain the traditional view of authenticity, based on a limited number of highly trusted senders. As for sharing, new competitors have emerged, especially the Internet. Interestingly, for a while, wireless broadcasting was considered an “old-fashioned” technology, as the new technology was wired. This situation has changed once again.

A further, very important aspect was the appearance of telephony, roughly at the same time as analog radio; this was the most important technological step in the development of personal communication. Railway, beginning with its modern form in the middle of the 19th century, can be regarded as similarly important; however, transportation is not primarily devoted to communication. Telephony allows people to communicate with each other synchronously in time while being released from the constraints of space. Analog telephony relies on circuit switching, giving its customers the illusion of having private connections while at the same time intensively sharing the same cabling system. Privacy is provided in principle, but at that time everybody was aware of the fact that even private conversations might have uninvited listeners, not necessarily on purpose but rather due to ordinary errors. Telephone conferences and broadcasting remained rather rare applications. It is interesting to note that the Internet has been strongly connected with “plain, old” telephony from the beginning through its use of the telephone system as a transport medium.

Another extremely significant step was the appearance of wireless technology, both in computer networking and in telephony. The idea of “ubiquitous” computing emerged in the eighties, strongly related to pervasiveness and mobility.

This is—very roughly sketched—the situation in which ongoing research finds itself. A large number of papers have been published in recent years, addressing many of the issues in this big picture. Instead of trying to reflect on this diversity, we ask the following question: what is the future of combined broadcasting, personalization, and mobility?

3. An Attempted Glimpse into the Future: the “French Revolution” of Broadcasting

We assume that in the future virtually everybody may broadcast any kind of message at any time and may, of course, also receive any such message at any time, at any place, with any kind of device. We could say that broadcasting will become democratic. This shatters the fundamentals of broadcasting, as broadcasting is—as we tried to show—basically undemocratic. Therefore, the development of personalization, mobilization, and enhanced flexibility is not just an option but a necessity. In the near future, we can expect radically new usage patterns to arise, characterized by the following main features.

(1) Digital multimedia will be produced by many sources and injected into a fully distributed and multimodal environment at many different locations.

(2) A huge “web” of multimedia data will be produced and consumed with various aims and requirements.

(3) Besides entertainment, professional use of digital multimedia will grow considerably.

(4) Production and consumption of multimedia data will be better integrated into the computing environment than is the case today.

In such a world, the production, search, access, delivery, processing, and presentation of multimedia data must become much more flexible; in many cases they must become “spontaneous.” While spontaneity is an enrichment of everyday life, it is extremely hard to realize in technology. Thus research will be confronted with new challenges. Let us consider a few example scenarios of this new multimedia world.

3.1. Some Usage Scenarios
3.1.1. Live Event with Professional and Amateur Producers and Consumers

A first example is a live event—such as the “Iron Man” competition, where a few thousand athletes compete in swimming, biking, and running over a large but limited geographical area for several hours—followed by tens of thousands of fans. A huge number of still and moving pictures are created by a variety of sources, including professional camera teams, static surveillance cameras, and a great number of private people with very heterogeneous photographic equipment and skills. In addition, people with a wide range of interests would like to consume these pictures. Many of the consumers are watching just for fun, some others in order to track a certain participant, and yet others, such as the event organizers, are watching to obtain a global view of the whole competition. How can these users easily find exactly what they need, without being bothered by long sequences they are not interested in? How can they get the required content without substantial delay and in exactly the quality they require (neither better nor worse)? Currently, there is no system that is able to cope (or even approximately cope) with such a complex and spontaneous world.

3.1.2. Public Motorway Equipped with Sensors and Cameras

A company operating public motorways equipped with thousands of sensors and hundreds of cameras actually produces more broadcast material than a number of TV channels combined. This material is obviously not of a trivial or entertaining nature; nobody wishes to watch traffic on motorways for days or even hours. What is needed is a system that automatically identifies interesting events and offers them to users (typically professional staff of the company, maybe police and ambulance officers, or even public users planning their routes) to observe and evaluate. In many cases, the pictures of one single camera do not suffice; a group of cameras and related sensors should be identified that delivers relevant data for a certain important event (traffic jams, accidents, etc.) or enables a global view of a major section (e.g., traffic in a certain area is quiet, whereas it is hectic in another, connected area). Current systems are still very far from providing such complex services.

3.2. Popularity Management as a Compromise between Sharing and Personalization

We have known for many years that the popularity of videos essentially follows the laws of Zipf and Pareto. This means—albeit overly simplified—that roughly 20% of all videos stored somewhere accessible over the Internet will be downloaded or streamed more than once. The remaining 80% will remain essentially unused. What remains mostly unreported is that the same laws hold for the scenes inside videos; that is, only portions amounting to 20% of all downloaded videos will actually be watched. Putting these two observations together, we come to the result that ca. 4% of all video material is watched by a second person (beyond the author); the rest is just there. However, resource management hardly considers this issue, and if it does, then at most at the level of the popularity of entire videos, hardly at the level of individual scenes. Efforts are typically made to provide good resource management for the entire 100%—although what we actually need is effective management of the relevant 4%. Even if resource management takes popularity into consideration at the level of entire videos, and even if techniques such as partial caching do consider popularity at the level of scenes to a certain extent, a huge potential for savings, up to two orders of magnitude, still remains. The difficulty is of course obvious: we usually do not know which 4% should be supported. Therefore, we need a new model of video delivery. Instead of viewing videos as sequential streams of data (resembling the video-tape paradigm), they should rather be regarded as direct-access media. Direct accessibility in several dimensions means that users should get exactly (1) what they need, (2) when they need it, and (3) how (in what quality) they need it.
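The 20%-of-20% arithmetic above can be illustrated with a small simulation. The following sketch (all parameters are illustrative assumptions, not measurements) draws requests from Zipf-like distributions, first over videos and then over the scenes within the chosen video, and reports which fraction of scene-level units is ever requested and which fraction is requested more than once:

    import numpy as np

    rng = np.random.default_rng(42)
    N_VIDEOS, SCENES_PER_VIDEO, N_REQUESTS = 1_000, 20, 50_000

    # Zipf-like weights: rank r receives weight proportional to 1/r.
    video_w = 1.0 / np.arange(1, N_VIDEOS + 1)
    video_w /= video_w.sum()
    scene_w = 1.0 / np.arange(1, SCENES_PER_VIDEO + 1)
    scene_w /= scene_w.sum()

    # Each request picks a video and then a scene inside it.
    videos = rng.choice(N_VIDEOS, size=N_REQUESTS, p=video_w)
    scenes = rng.choice(SCENES_PER_VIDEO, size=N_REQUESTS, p=scene_w)

    hits = np.zeros((N_VIDEOS, SCENES_PER_VIDEO), dtype=int)
    np.add.at(hits, (videos, scenes), 1)  # accumulate per-unit request counts

    print(f"scene units requested at all:     {(hits > 0).mean():.1%}")
    print(f"scene units requested repeatedly: {(hits > 1).mean():.1%}")

The output shows the familiar skew: a small head of scene units collects almost all repeated requests, while the long tail stays essentially untouched—precisely the asymmetry that scene-level popularity management could exploit.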

This implies a transition to flexible management in heterogeneous environments. The above observation may open a new chapter in combining sharing and personalization. Even though classical broadcasting (sending the same content to everybody) cannot work in the future, even “democratic” systems can efficiently share resources by carefully tracking popularity. To explore this, let us take a look at such a possible future delivery system. We make the following basic assumptions for a new model of video delivery.

3.2.1. Nonlinear Video Delivery

We assume that videos are rarely watched sequentially. In many usage scenarios, especially professional ones, people want to find certain scenes quickly and avoid watching long sequences they are not interested in.

3.2.2. Two-Phase Delivery

We distinguish between video offering and video delivery. Offering should be fast and interactive and should provide information about the videos available within a certain context. During the offering stage, the underlying resource management system should be able to make preparations for an efficient delivery. We could use a restaurant as a metaphor. In a good restaurant, guests are served essentially in two main phases. In the first—the offering phase—they receive the menu and some appetizer very quickly. This enables them to make their favorite choices comfortably and also gives the kitchen time to prepare for the second phase—the delivery of the main dish. In a video delivery system, the whole issue is much more complicated. We might have many “cooks” (video providers) and many guests, and they may even change their roles (and places). Offering and delivery overlap all the time. Offering is push-based; that is, the service provider more or less “aggressively” announces meta-information about the available content. This must be very fast, as studies show that it is better to present to clients something they did not explicitly require than to present nothing (or a rotating “hourglass”). “Real” content can then be delivered pull-based (or in a hybrid way).
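A minimal sketch of this two-phase protocol might look as follows (all class and method names are our own illustrative assumptions): the offer phase pushes lightweight metadata immediately and uses the user’s decision time to warm up the delivery path; the delivery phase then pulls one concrete item.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Offer:
        """Phase 1 payload: lightweight metadata only, pushed to the client."""
        video_id: str
        title: str

    class TwoPhaseServer:
        def __init__(self, content: dict[str, bytes]):
            self.content = content        # video_id -> encoded video data
            self.warm: set[str] = set()   # items prepared during the offer phase

        def offer(self, context: str) -> list[Offer]:
            # Push phase: answer immediately with metadata, and use the time
            # the user spends choosing to prepare the likely deliveries.
            offers = [Offer(vid, f"{context}/{vid}") for vid in self.content]
            self.warm.update(o.video_id for o in offers)  # naive cache warm-up
            return offers

        def deliver(self, video_id: str) -> bytes:
            # Pull phase: the client explicitly requests one offered item.
            if video_id not in self.warm:
                raise KeyError("item was never offered in this context")
            return self.content[video_id]

    server = TwoPhaseServer({"v1": b"<segment>", "v2": b"<segment>"})
    menu = server.offer("ironman-finish-line")    # fast, push-based
    data = server.deliver(menu[0].video_id)       # slower, pull-based

In a real system the warm-up step would of course be selective (driven by popularity and context) rather than prefetching everything offered.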

3.2.3. Video Composition/Decomposition

Data should be decomposed into units of “meaningful” size (how large “meaningful” is depends on the actual context) and should be composable, under quality-of-service (QoS) constraints, into continuous “movies.” For performance reasons, the decomposition may be performed in a lazy way, in order to avoid decomposing data that is never used or is only used for “traditional” streaming in its entirety.

A traditional, long, continuous video is then just a “special case”: a sequential composition of data units under certain QoS constraints (e.g., 25 frames/sec, jitter < 10 msec). The interesting point is that we may compose any data units in any order under arbitrary QoS constraints. The user thus turns from a passive consumer into an active “composer.” This does not mean that the interactive human user always has to take on the burden of composition: predefined profiles of user classes may serve as composition patterns. What is essential is that the system basically supports free and flexible composition. In real usage scenarios, full freedom must of course be reasonably restricted.
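The following sketch (names and types are our own illustrative assumptions) captures this idea: a composition is simply an ordered selection of units plus a QoS contract that the transport layer must honor at play-out time, and the traditional sequential video falls out as a special case.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DataUnit:
        unit_id: str
        duration_s: float

    @dataclass(frozen=True)
    class QoS:
        fps: int              # e.g., 25 frames/sec
        max_jitter_ms: int    # e.g., 10 msec

    @dataclass(frozen=True)
    class Composition:
        units: tuple[DataUnit, ...]   # any units, in any order
        qos: QoS                      # contract for the transport layer

    def compose(units: list[DataUnit], qos: QoS) -> Composition:
        """Free composition: the 'movie' is whatever unit sequence the user
        (or a profile acting for her) selects, plus a QoS contract."""
        return Composition(tuple(units), qos)

    # A traditional video is then just the special case of sequential order:
    scenes = [DataUnit(f"s{i}", 30.0) for i in range(10)]
    classic = compose(scenes, QoS(fps=25, max_jitter_ms=10))
    # ...whereas a personalized highlight reel reorders and skips freely:
    reel = compose([scenes[7], scenes[2]], QoS(fps=25, max_jitter_ms=10))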

Decomposition and composition obviously come at a price. Using them only to support traditional usage patterns is hardly a good idea. However, if we assume that videos are rarely watched sequentially from beginning to end, but rather that certain important or “popular” parts are watched often, then we are confronted with new optimization possibilities, as the sketch below illustrates. Popular parts may be replicated much more intensively than others. Moreover, the same data might be replicated in different qualities, following different replication strategies. Optimal data management is a most challenging research question.
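As a toy illustration of popularity-driven replication (our own simplification; a real strategy would also weigh storage and network costs, and could vary quality per replica), one can allocate a fixed replica budget across scene units in proportion to their observed hit counts:

    def allocate_replicas(hits: dict[str, int], budget: int,
                          min_replicas: int = 1) -> dict[str, int]:
        """Distribute a replica budget proportionally to popularity, keeping
        at least min_replicas per unit so nothing becomes unreachable (the
        floor may therefore slightly exceed the nominal budget)."""
        total = sum(hits.values()) or 1
        return {unit: max(min_replicas, round(budget * count / total))
                for unit, count in hits.items()}

    # Hypothetical hit counts for four scene units:
    print(allocate_replicas({"u1": 900, "u2": 80, "u3": 15, "u4": 5}, budget=20))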

3.3. Self-Organizing Delivery

The delivery system should strive for self-organization. Each node of the delivery system (no matter whether a server, a proxy, or a P2P client) can follow a simple, local goal function. A goal is a state of affairs in which an optimal utility value is reached and stabilized over a certain period of time. For example, a proxy could have the local goal of maximizing its own throughput by building groups with other proxies sharing the same kinds of video segments. Even data units might have goals (e.g., to be replicated somewhere). The system has a required global behavior, expressed as a global goal. In an ideal self-organizing system, this global goal emerges as a result of the local behaviors. In practice, some parts of the global behavior might be controlled in a non-self-organizing manner, leading to a nonideally self-organizing system.
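The proxy example could be sketched as follows (a toy model under our own assumptions: utility is approximated by segment overlap within a group, and each node takes greedy local steps; any global behavior is meant to emerge from these steps):

    class ProxyNode:
        def __init__(self, node_id: str, segments: set[str]):
            self.node_id = node_id
            self.segments = segments
            self.group: set["ProxyNode"] = set()

        def utility(self) -> int:
            # Local goal: throughput, approximated here by how many segments
            # this node can fetch from its group instead of a distant origin.
            return sum(len(self.segments & peer.segments) for peer in self.group)

        def step(self, candidates: list["ProxyNode"]) -> None:
            # One self-organization step: greedily join the candidate peer
            # whose segment overlap raises the local utility the most.
            best = max(candidates,
                       key=lambda p: len(self.segments & p.segments),
                       default=None)
            if best is not None and self.segments & best.segments:
                self.group.add(best)
                best.group.add(self)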

3.4. Trust Management as a Compromise between Authenticity and Popularity

Who will broadcast in the future? The answer is simple: virtually everybody. This leads immediately to the question of authenticity. How will we be able to decide on the value of the received information if the sender is not necessarily trusted, maybe not even known? This dilemma is of course already very well known, as demonstrated, for example, by discussions on the value of Wikipedia entries. It becomes more difficult if the information changes rapidly, as is the case in live events. This already occurs in some extreme cases, for example, natural catastrophes, where the pictures and reports of eyewitnesses are of high value even though their technical or artistic quality is low. If pictures are taken at such an event, then they usually reach a trusted broadcaster via more or less “private” communication, who subsequently checks them as far as possible before publishing them.

It would, however, be much better if future broadcasting systems offered well-defined services (1) to submit input messages “spontaneously,” (2) to check them for authenticity and assign a certain level of trust, and even (3) to offer a way of rewarding the providers of such input. Authenticity or trust management must become an integral part of future services.
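These three services could be sketched as an interface like the one below (entirely our own illustrative construction; the scoring formula is an arbitrary placeholder): trust combines the sender’s reputation with how many independent senders corroborate the same content, and rewards feed back into reputation.

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class Submission:
        sender_id: str
        payload: bytes

    class TrustService:
        def __init__(self):
            self.reputation: dict[str, float] = {}    # sender -> score in [0, 1]
            self.store: dict[str, Submission] = {}
            self.witnesses: dict[str, set[str]] = {}  # content hash -> senders

        def submit(self, s: Submission) -> str:
            # Service (1): accept spontaneous input from anybody.
            sid = hashlib.sha256(s.sender_id.encode() + s.payload).hexdigest()[:12]
            self.store[sid] = s
            h = hashlib.sha256(s.payload).hexdigest()
            self.witnesses.setdefault(h, set()).add(s.sender_id)
            return sid

        def trust_level(self, sid: str) -> float:
            # Service (2): a placeholder score mixing reputation and corroboration.
            s = self.store[sid]
            h = hashlib.sha256(s.payload).hexdigest()
            rep = self.reputation.get(s.sender_id, 0.1)     # unknown senders start low
            corrob = min(1.0, len(self.witnesses[h]) / 5)   # saturates at 5 witnesses
            return 0.5 * rep + 0.5 * corrob

        def reward(self, sender_id: str, bonus: float = 0.05) -> None:
            # Service (3): here the reward is reputation rather than money.
            current = self.reputation.get(sender_id, 0.1)
            self.reputation[sender_id] = min(1.0, current + bonus)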

3.5. Metadata Management as a Compromise between Sending Everybody the Same and Sending Everybody Something Else

What will be broadcast in the future? The answer is once again simple: virtually everything. The content will be multimodal, including continuous data. Moreover, as the previous considerations show, it is not enough to deliver pure data; we need additional information, generally called metadata. The level of trust is, for this aspect, one example of metadata. The current scientific literature on multimedia delivery concentrates almost exclusively on the delivery of “real data.” If metadata is required, its availability is simply “assumed.” (A good example of this is the MPEG-7 metadata standard, which leaves the delivery of metadata out of scope.) However, in dynamic scenarios such as those described above, users have no chance of getting the data they need without sophisticated metadata management. Much more than a simple electronic program guide (EPG) is needed. As long as one has the choice between two public TV channels, the selection is relatively easy. If a user has to choose among 200 channels, the decision is harder. If a user has to choose among thousands of sources, some of which are useful but cannot be properly identified, then a radically new technology is required. In Section 3.2.2, we introduced the idea of a two-phase delivery, consisting of an offering and a delivery phase. This is a possible, partial solution; the general point is that metadata management must be an integral part of any future broadcasting system.
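To make the scale argument concrete, a metadata-driven selection step might look like the following sketch (the names and the ranking rule are our own assumptions): offers carry keywords and a trust level, and the system reduces thousands of sources to a personal short list instead of asking the user to browse an EPG.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class OfferMeta:
        source_id: str
        keywords: frozenset[str]
        trust: float   # e.g., from a trust service, in [0, 1]

    def rank_offers(offers: list[OfferMeta], interests: set[str],
                    min_trust: float = 0.3) -> list[OfferMeta]:
        """Filter out untrusted or irrelevant sources, then rank the rest
        by keyword overlap with the user's interests and by trust."""
        relevant = [o for o in offers
                    if o.trust >= min_trust and o.keywords & interests]
        return sorted(relevant,
                      key=lambda o: (len(o.keywords & interests), o.trust),
                      reverse=True)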

Digital rights management (DRM) also belongs to the same category. The MPEG-21 standard offers the necessary tools for interoperable DRM. Why its acceptance lags behind expectations is one of the questions that are harder to answer. Nevertheless, in the long term, we can assume with certainty that a business model will be generally accepted that enables consumers to access digital content as freely as possible and producers not to starve. There seems to be no alternative. Even if everybody has the possibility of becoming a broadcaster, no one is likely to agree to starvation.

4. Conclusions

Broadcasting is indeed in a state of revolution. Our well-known and well-understood concepts have to be revisited.

(1) The idea of a kind of “divine” authority and authenticity will be replaced by the “democratic” notion of popularity, tightly coupled with integrated trust management.

(2) The traditional view of personalization, based on free selection among a small number of channels, is definitely outdated. The user must get what she needs, when she needs it, and how (in which quality) she needs it. In particular, she should not be presented with content that she does not need. The traditional view of privacy, of being encapsulated in a kind of sandbox, will also disappear in the future. Future broadcast systems will need to be able to switch dynamically between private and public data and contexts.

(3) The sharing of resources based on sending everybody the same content is outdated. This can be efficiently replaced by a delivery model that shares information on the popularity of data and that subsequently favors popular data. This promises a good compromise between share-everything and share-nothing standpoints.

(4) Interactivity will become a central issue, not only in the sense that consumers must receive very detailed metadata, which serves as a basis for making qualified selections, but also in the sense that everybody may change from being a consumer into a producer and vice versa.

(5) Rights management is probably not a workable concept and should be replaced by business models. Valid business models, enabling the highly flexible scenarios described previously without hurting the interests and rights of either producers or consumers, must emerge soon.

5. Overview of the Contributions in this Special Issue

This special issue presents a selection of state-of-the-art research works in the domain of mobile multimedia broadcasting (MMB) with a focus on personalization.

In the first paper, “Acceptance threshold: a bidimensional research method for user-oriented quality evaluation studies,” S. Jumisko-Pyykkö et al. present a survey of state-of-the-art methods of acceptance assessment based on subjective user feedback and study their validity in the context of mobile television. Personalized multimedia applications need to make use of multimedia adaptation methods; two papers of the special issue present contributions in this domain.

In the second paper, “Adapting content delivery to limited resources and inferred user interest,” C. Plesca et al. present adaptation policies specifically designed for the highly dynamic and partially or fully observable contexts typical of mobile environments, with an application to a film browsing service.

In the third paper, “Efficient execution of service composition for content adaptation in pervasive computing,” Y. Fawaz et al. propose a method for executing multimedia document adaptation plans based on the composition of services.

In the fourth paper, “Two-level automatic adaptation of a distributed user profile for personalized news content delivery,” the authors present work that pertains to two major issues of the domain. The first is the implementation of personalization features in the specific context of mobility. The second is the collection and use of user feedback in order to offer a better personalized service, which in this case is implemented using machine learning techniques. An important application domain for MMB services is the home multimedia environment, in which Universal Plug and Play Audio Visual (UPnP-AV) devices are often used.

In the fifth paper, “Context-aware UPnP-AV services for adaptive home multimedia systems,” M. Papadogiorgaki et al. propose an enhancement of UPnP-AV that enables the adaptation of multimedia content based on contextual information. In order to offer optimal personalization features to their users, new MMB applications need to go beyond traditional adaptation methods based on parameters such as image size, color scale, bitrate, and so forth, by implementing finer-grained adaptation features. Two examples of such applications are presented in this issue.

In the sixth paper, “Region-based watermarking of biometric images: case study in fingerprint images,” K. Zebbiche et al. propose a method for personalizing biometric images using region-based watermarking. In the last paper, “Extracting moods from songs and BBC programs based on emotional context,” M. K. Petersen and A. Butkus make an initial contribution toward the goal of emotion-based personalization by showing how moods can be automatically extracted from songs.

Harald Kosch
László Böszörményi
Günther Hölbling
David Coquil
Jörg Heuer