Abstract

Cost-effective, smooth multimedia streaming to remote customers through a distributed “video on demand” architecture has been one of the most challenging research issues of the past decade. A hierarchical system design is used for the distributed network in order to satisfy more requesting users. The distributed hierarchical network system contains all the local and remote multimedia storage servers and is used to provide continuous availability of the data stream to the requesting customer. In this work, we propose a novel data-stream handling methodology for reducing connection failures and delivering smooth multimedia streams to remote customers. The proposed session based single-user bandwidth requirement model captures the bandwidth requirement of any interactive operation, such as pause, move slowly, rewind, skip some frames, and move fast by a constant number of frames. The proposed session based optimum storage finding algorithm reduces the search hop count towards the remote data storage server. The modeling and simulation results show a clear improvement over the distributed system architecture. This work presents a novel bandwidth requirement model for the interactive session and gives the trade-off between communication and storage costs for different system resource configurations.

1. Introduction

An interactive video on demand (VOD) system requires smooth data streaming so that users, irrespective of geographic location, can access on-demand video, such as movies, electronic encyclopedias, interactive games, and educational resources, from distributed storage servers through a high-speed network. The number of customer requests grows exponentially in the local cluster domain, which places a heavy load on the network system. The consequences are a high rate of dropped customer requests and huge bandwidth wastage. The Next-Generation Network (NGN) provides multimedia services over broadband networks, supporting high-definition, DVD-quality multimedia streaming content. Multimedia data streaming services are thus seen to be merging mainly in three areas: computing, communication, and broadcasting. This has numerous advantages, but more exploration is still needed for the large-scale deployment of the storage system. Work on the distributed system hierarchy of multimedia storage has focused on analyzing the system architecture. The distributed system hierarchy of multimedia storage becomes important because of the services supported by high-speed networks, multimedia storage servers, and distributed multimedia file systems. A customer is able to request a multimedia stream from anywhere and at any time. In response to a customer's request, a local or remote storage system delivers a high-quality digitized multimedia data stream directly to the client's set-top box over the local distributed network. The local cluster domain may store a complete or a partial set of multimedia data streams. As the overall VOD user population grows, new local clusters are added to the distributed system, and the network capacities are sized to match the increased user population of that neighborhood. This clearly indicates the need for scalability of the distributed system hierarchy of a multimedia storage system. The remote sites may be archival in nature, providing a permanent repository for all multimedia streams, or they may act as replicated servers such as mirrored sites. The remote sites may serve many user populations. The distributed system itself is a hierarchy of neighborhoods in the geographical region. If a request cannot be served from the local site, it may be directed to other remote sites. A storage container, either local or remote, has to reserve sufficient I/O and network bandwidth before accepting a customer's request. We define a server channel as the server resource required to deliver a multimedia stream while guaranteeing a client's continuous playback. In general, the VOD service can be characterized as a long-lived session; a typical movie-on-demand service runs for one to two hours. Thus, sufficient storage and I/O bandwidth must be available, and we need a solution that efficiently utilizes the server and network resources. Class based admission control [1] was used in the “video on demand system” to obtain better performance. However, some level of planning and managing of resources is required in Internet-based [2, 3] video on demand to raise the performance to an optimum level. In fact, the network I/O bottleneck has been observed in many earlier systems, such as the Network Project in Orlando [4] and Microsoft's Tiger Video Fileserver [5]. The multicast facilities of modern communication networks [6–8] offer an efficient means of one-to-many data transmission.
Multicast techniques can also significantly improve VOD performance by greatly reducing the required network bandwidth, so that the overall network load is reduced. Multicasting also alleviates the workload of the VOD server and improves the system throughput by batching requests [9, 10]. In this paper, we present a new methodology to reduce the traffic load in the “distributed video on demand system” for different types of service requests in an interactive session, such as fast forward, pause, stop, and move backward. Tewari and Kleinrock [11, 12] designed a simple queueing model to shape the number of replicas according to the request rate and the driven content. Zhou and Xu [13] target jointly maximizing the average encoding bit rate in content transfer over the network. The overall average number of content replicas that minimizes the communication load imbalance among the storage servers is considered in [14]. Content placement for P2P video on demand systems is briefly presented by Suh et al. [15]. LRU (least recently used) and LRFU (least recently and frequently used) based methods work efficiently and classify batches of content with respect to the hit count; they efficiently update the peer cache and the proxy server cache in mesh-type networks for content replicas [9, 10]. The huge user-initiated traffic load on a video on demand network generally follows a Zipf-like distribution, which is used to analyze the traffic load and the shape of the load inside the on-demand system [6–8]. Li et al. [16] proposed different types of batching request models that integrate both user activity and batching: user requests are batched, and the effect of such batching is captured in a batching model. The user activity includes the various interactive modes such as pause, move slowly, and move fast. However, that model is unable to describe the user-initiated interactive operational modes. The video on demand system requires a uniform model that can be used to determine the network bandwidth requirements during a user's interactive session. A proper bandwidth requirement model for the interactive session gives the trade-off between communication and storage costs for different system resource configurations. This paper is structured as follows. Section 1 presents a brief literature survey that motivates the problem. Sections 2 and 3 briefly present the hierarchical architecture of the distributed system and the distributed database storage. Section 4 presents the session based multiuser model and the single-user bandwidth requirement model. Section 5 illustrates the traffic flow model for this problem. Section 6 describes the parameters of the simulation environment and the performance evaluation of the system. Section 7 broadly reviews related work on the interactive session and the related domain. Concluding remarks and the references close the paper.

2. Distributed Architecture

A large-scale VOD system requires the servers to be arranged in a distributed fashion in order to support a large number of concurrent streams. The system architecture is hierarchical, and each local server handles the requests from a specific geographic zone. A local server in the hierarchy takes requests from its cluster switch; if it cannot handle them, the request proceeds to the next level. This architecture provides cost efficiency, reliability, and scalability of servers. Generally, servers are organized either in a tree shape [9] or in a graph structure [8]. A graph-structured system often offers good quality of service for handling requests, but the management of requests, videos, and streams is complicated in the distributed system. A tree-shaped system can easily manage requests, videos, and streams, but it offers poorer quality of service than the former. In order to evaluate the effectiveness of distribution strategies in such a hierarchy, the authors of [2, 17] investigate how to reduce storage and network costs while taking the customers' behavior into account.

In the distributed system hierarchy of multimedia storage architecture, local proxy servers are installed at strategic locations in the network (closer to the clients). Remote clusters can communicate directly with the network's distributed storage and with its local proxy servers. Each local cluster server can support a number of customers connected to it through a cluster switch. The customers are connected to the distributed database servers through the cluster switch, which acts as an interface between the client cluster and the database servers. The proxies distribute the storage functionality within the network by using the concept of local proxy storage, as shown in Figure 1. If the user cannot be served by the local storage server for any reason, such as a blockage in the local network or data that is not available in the local storage, then the requesting user is redirected to the distributed multimedia storage. By locating the proxy server close to the user, significant reductions in the load on the system as a whole are expected [6, 7]. Another advantage of the distributed local proxy video on demand system is that it can be expanded horizontally for system scalability and evolution: it can start from an initial two-level system (with a centralized multimedia server and one local proxy video server) and grow to a system with as many local proxy servers as needed. The system is integrated with the distributed multimedia storage system. The distributed system may utilize a lower than average network bandwidth and have higher system reliability, but at the expense of a significant amount of local storage.

3. Distributed Database Server

If a request is not served by the local storage then it proceeds to the remote storage. The working structure of the distributed storage is presented in Figure 2. Video on demand viewers send their requests simultaneously via the connected network; here we consider streams of requests accessing a particular video storage site. Initially, all requests arriving over HTTP or HTTPS are forwarded to the web cache memory. If the web cache holds the requested video content, the stream is forwarded directly to the viewer. If the web cache does not have the required content, the request is forwarded to an application server chosen from the list of application servers; the least loaded application server is selected [4, 6]. The selected application web server then searches for the required video stream among the set of well connected distributed database servers shown in Figure 2.

In a large-scale video on demand system, bandwidth optimization largely depends on minimizing hop counts during high-demand periods, at specific times, or on certain days of the week [9]. The efficiency of the video on demand system depends on the “least recently and frequently used” (LRFU) cache updating policy at each server. The LRFU policy is composed of two dynamically growing cache memory blocks, the “least recently used” (LRU) and the “least frequently used” (LFU) cache memories [9]. The performance of the audio/video on demand system depends heavily on efficient cache replacement at the proxy server [10]. The pattern of requests initially submitted to the web proxy container is approximately described by a Zipf-like probability distribution [4, 8].
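
The LRFU policy sketched above can be viewed as a cache-ranking rule that combines recency and frequency. The following Python fragment is only an illustrative sketch of such a rule, assuming an exponential-decay scoring function; the class name, the decay parameter, and the score update are our own simplifications and are not taken from [9, 10].

    import time

    class LRFUCache:
        """Toy LRFU-style cache: an item's score mixes recency and frequency."""

        def __init__(self, capacity, decay=0.1):
            self.capacity = capacity      # maximum number of cached video segments
            self.decay = decay            # decay ~ 0 behaves like LFU, large decay like LRU
            self.scores = {}              # segment id -> combined recency/frequency score
            self.last_access = {}         # segment id -> time of last access

        def _weight(self, age):
            # contribution of a past reference decays exponentially with its age
            return 0.5 ** (self.decay * age)

        def access(self, segment_id):
            now = time.time()
            old = self.scores.get(segment_id, 0.0)
            age = now - self.last_access.get(segment_id, now)
            self.scores[segment_id] = 1.0 + old * self._weight(age)
            self.last_access[segment_id] = now
            if len(self.scores) > self.capacity:
                self._evict(now)

        def _evict(self, now):
            # evict the segment whose decayed score is currently the lowest
            victim = min(self.scores, key=lambda s: self.scores[s]
                         * self._weight(now - self.last_access[s]))
            del self.scores[victim]
            del self.last_access[victim]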

4. Analytic Structure

The required bandwidth varies during the interactive session. Users employ various playing modes such as fast forward, skip, pause, and rewind, and the required bandwidth varies continuously with the mode of operation. In general, a channel is initially selected with a prior probability governed by the binomial distribution; at each session the trunk offers a total available bandwidth, and each channel requires some bandwidth at that session, which cannot exceed the trunk capacity. In real time the required channel bandwidth varies according to the user's playing mode of operation, and it is obtained for each type of interactive operation through a bandwidth demand adjustment function. A nominal bandwidth is assigned to each channel initially, but in real time, when a burst of requests arrives in a session, the channel is assigned the adjusted bandwidth or, failing that, at least the minimum bandwidth. If even this is impossible, the request is buffered in a queue (the first queue of the proxy server or of any intermediate router) [18]. A connection must first be set up, and then bandwidth is assigned to the channel according to the user's requirement and the availability of bandwidth in the trunk; the bandwidth granted to the user is maximized subject to this availability constraint. From connection setup to connection close, the user passes through a number of major interactive sessions in a random fashion. We denote the different modes of operation as fast forward (FF), play normally (PN), play slowly or move slowly (PS), rewind (RW), pause (PA), and so forth, each with its own bandwidth requirement. For each type of interactive operation there is more than one mutually exclusive outcome during any session.

Let the possible outcomes of a user's mode of operation in a session be mutually exclusive and exhaustive, each occurring with its respective probability (Exp i).

For example, one outcome corresponds to fast forward, one to play normal, one to play slow, one to rewind, and so forth. During a session each outcome occurs some number of times.

The number of independent observations of a customer's behavior in a session is the sum of the numbers of times the individual outcomes occur.

So, for each customer, the joint probability of the observed outcome counts takes a multinomial form, in which the leading coefficient counts the number of permutations of the events. This holds for a single customer whenever the bandwidth allocated to that channel is at least the bandwidth the selected modes require.

Accordingly, the aggregate bandwidth demand of the active links cannot exceed the total bandwidth of the trunk available in the ideal scenario at that session (Exp ii), where the number of active links corresponds to the number of active users. Every customer makes an independent choice of interactive mode of operation.
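
The derivation above can be read as a multinomial model. The display below is a reconstruction under notation of our own choosing (the symbols are not the authors'): if the k mutually exclusive modes E_1, ..., E_k have probabilities p_1, ..., p_k and occur x_1, ..., x_k times in n independent observations of a customer's behavior, then

    \[
    P(X_1 = x_1, \dots, X_k = x_k)
      = \frac{n!}{x_1!\, x_2! \cdots x_k!}\; p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k},
    \qquad \sum_{j=1}^{k} x_j = n, \quad \sum_{j=1}^{k} p_j = 1 .
    \]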

4.1. Session Based Multiuser Model

So, according to function (1), the distribution of the interactive-mode counts in a session is obtained.

In an interactive session, different bandwidths are required for the different service modes such as pause, fast forward, rewind, play normally, and play slowly; that is, the number of frames downloaded by the user node varies, while the frame length is fixed across interactive sessions. When a channel is assigned to a user, at least the minimum bandwidth is allocated to that user for any interactive session.

It follows a probability distribution function of this form, and in a session we obtain the mixed-type distribution by combining (1), (2), and (3). The parameters of this distribution are the total number of users in the video on demand system, the total number of interactive modes used by a particular customer in that session, the particular type of mode (pause, move forward, rewind, and so forth), and the number of times that mode is used by that user in the session.
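
To make the mixed multiuser model concrete, the following Python sketch simulates one session: each user performs a multinomially distributed mix of interactive modes, and each mode is assigned a fixed bandwidth. The mode probabilities and bandwidth values are hypothetical placeholders and are not the parameters behind (1)-(3).

    import numpy as np

    rng = np.random.default_rng(0)

    # hypothetical per-mode bandwidth (Mb/s): pause, play, slow, fast forward, rewind
    mode_bandwidth = np.array([0.1, 2.0, 1.0, 4.0, 4.0])
    mode_prob = np.array([0.15, 0.50, 0.10, 0.15, 0.10])   # assumed mode probabilities

    def session_demand(num_users=20, ops_per_session=30):
        """Aggregate bandwidth demand of one interactive session (toy model)."""
        total = 0.0
        for _ in range(num_users):
            # number of times each mode is used by this user (multinomial, as in Section 4)
            counts = rng.multinomial(ops_per_session, mode_prob)
            # session-average demand of this user, weighted by mode usage
            total += counts @ mode_bandwidth / ops_per_session
        return total

    print("mean aggregate demand over 1000 trials: %.1f Mb/s"
          % np.mean([session_demand() for _ in range(1000)]))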

4.2. Single-User Bandwidth Requirement Model

According to Expression (Exp i), an individual user may be in any type of interactive mode of operation, such as pause, rewind, fast forward, or move slowly. Let the bandwidth required for a particular interactive mode of operation in that session be given; it lies between the minimum bandwidth guaranteed to the channel and the total bandwidth available in the trunk.

If, in a session, there are several possible interactive modes, then the bandwidth requirements of the user's modes of operation jointly satisfy the corresponding constraint expression. In any interactive mode of operation in a session there are multiple subsessions.

The occurrence of that particular interactive subsession is therefore expressed by (7). To evaluate (7), we need to find the volume of a solid sphere in a multidimensional space in which one coordinate is the bandwidth required to continue and maintain the active interactive mode of the given type and the remaining coordinates are the very minimum bandwidths required for the other, passive, interactive user modes in the same session. The bits reserved for the connectivity of the other interactive services are therefore also required.

Expression (8) then follows, where the reserved bits are those required for the other interactive user modes of operation, excluding the active type. In general, the n-dimensional volume of the Euclidean ball of a given radius in n-dimensional Euclidean space is given by the standard formula, with separate closed forms for even and odd dimensions; here the dimension equals the number of interactive modes in a session.
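
The standard closed forms for the volume of the Euclidean ball referred to above are restated here because the display was lost from the original text; n is the number of interactive modes in a session and R is the radius (interpreted in the text as a bandwidth bound):

    \[
    V_n(R) = \frac{\pi^{n/2}}{\Gamma\!\left(\tfrac{n}{2} + 1\right)} R^{n},
    \qquad
    V_{2k}(R) = \frac{\pi^{k}}{k!} R^{2k},
    \qquad
    V_{2k+1}(R) = \frac{2\,(k!)\,(4\pi)^{k}}{(2k+1)!} R^{2k+1}.
    \]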

Because the user's interactive mode of operation changes in each session, the volume of the sphere changes as well. We then obtain an inequality, and expression (9) follows. Here we focus on finding the bandwidth requirement for a given type of user service mode of operation.

Using integration by parts, the bandwidth allocated to that user satisfies the resulting expression (11). The interactive sessions are mutually independent for each customer, and at any instant only one type of interactive session occurs, for example, pause, watch normally, or move forward.

If a viewer opens more than one window, for example, one window is paused while another window is using a different service, then the service requests are multiplexed into a single stream through the dedicated channel that was allocated to that viewer.

Expression (11) then reduces accordingly. The integrand here is an n-dimensional function related to the n-dimensional volume. If the user is active in only one interactive mode, then, as a special case, the expression simplifies further.

5. Traffic Flow Model

In the distributed local proxy architecture, local proxy servers are installed at strategic locations in the network (closer to the clients). Remote viewer clusters can communicate with the network of distributed primary storage servers and with their local proxy servers. Each local proxy server supports continuous streaming to a number of customers connected to it through a cluster switch. The customers are also connected to the distributed multimedia storage, that is, the databases, through the cluster switch and the proxy server; this composition acts as an interface between the client cluster and the broadband network. The main idea of distributed video on demand (VOD) local storage (as shown in Figure 1) is to distribute the multimedia data functionality within the user cluster by using the concept of local proxy storage. If the streams cannot be served by the local storage, for example, because of blockage at the proxy when pulling the required segment or chunk of video objects from the local multimedia storage, or because the video clips are not available at the proxy server, then the request is transported to the distributed database servers. By locating the proxy server close to the user, significant reductions in the load on the system as a whole are expected [6, 7]. Another advantage of the distributed architecture is that it can be expanded horizontally for system scalability and evolution: it can start from an initial two-level system (with a centralized multimedia server and one local storage video server) and grow to a system with as many local servers as needed. Compared with a centralized multimedia server system, the distributed system may utilize a lower than average network bandwidth and has higher system reliability, but at the expense of a significant amount of local storage in the architecture. The proposed model of the distributed video on demand architecture satisfies the basic requirements of distributed networks. Consider the maximum number of customer requests serviceable at the proxy server for a user-centric cluster, the disk bandwidth of the local storage, and the client request playback rate (including the interactive session). A stream available at the proxy server can then be served to a customer as long as the aggregate playback rate of the admitted requests does not exceed the disk bandwidth. The request admission control test at the proxy server determines whether the proxy server is able to serve the maximum number of requests. The proposed methodology is applied at the proxy server. The proxy server has a total bandwidth capacity spread over a number of ports. There are several classes of service requests for interactive operations, and each class arrives at the proxy server with its own mean rate. The service request categories are fast forward with one frame, two frames, up to some number of frames, pause, move backward, and so forth. Initially the proxy server's ports are divided into a number of sections, each with its own port capacity, and the overall proxy port capacity is the sum of the sectional capacities. Initially, each section is assigned to a specific category of service requests. At any interactive session the occupied size of each section of ports can change dynamically; it depends entirely on the viewers' choices. So at any session, if a request of a particular service class finds that its specific preassigned section is blocked, it searches for a free port in the other sections.
This is the mutually shareable port strategy. A request intended for one section may be admitted to another section with a certain probability. Requests are generated in the viewer clusters and arrive at the connected proxy server with a mean Poisson rate, and each class of service request has a maximum port occupancy. When a request of a given class arrives, the proxy checks whether the current occupancy of the corresponding section is below its capacity. If so, the service request is admitted to the corresponding section of ports with the associated probability, and the port occupancy of that class is updated for the session. If the preassigned section is full, checking continues in the other port sections. This is possible because of the sectional subnetwork between each proxy and the corresponding local storage in the “distributed video on demand system,” which forms a compact topology. To admit the request to the VOD system, checking continues across the other service sections until a free port is found. If a free port is available in another section, the request is admitted by the proxy server and the admission probability is updated according to Lemma 1 and expression (17).
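
A minimal sketch of the sectioned-port admission control described above is given below in Python, assuming one port section per service class and a simple spill-over rule (try the preassigned section first, then share the ports of the other sections). The class names, capacities, and arrival mix are illustrative assumptions, not values from the paper.

    import random

    class SectionedProxy:
        """Proxy whose ports are split into sections, one per service class,
        with mutual sharing when the preferred section is full (Section 5)."""

        def __init__(self, capacities):
            self.capacity = list(capacities)        # ports available in each section
            self.occupied = [0] * len(capacities)   # ports currently in use per section

        def admit(self, service_class):
            """Return the index of the section that accepts the request, or None if blocked."""
            order = [service_class] + [j for j in range(len(self.capacity)) if j != service_class]
            for j in order:                          # preassigned section first, then the others
                if self.occupied[j] < self.capacity[j]:
                    self.occupied[j] += 1
                    return j
            return None                              # blocked: forward towards remote storage

        def release(self, section):
            self.occupied[section] -= 1

    # toy usage: four service classes, ten ports per section, skewed arrival mix
    proxy = SectionedProxy([10, 10, 10, 10])
    blocked = sum(proxy.admit(random.choices(range(4), weights=[0.4, 0.3, 0.2, 0.1])[0]) is None
                  for _ in range(200))
    print("blocked requests:", blocked)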

5.1. Lemma

Lemma 1. A system is compact if and only if the execution of any specific event depends mutually on the other events in the compound execution.

Proof. In a compact system the resources, such as channels, links, and ports, are shareable. We prove the converse case: let the events be mutually dependent. We show that the compound execution depends on the occurrence of each specific event. Expanding the joint probability of the compound execution by conditioning on each event in turn establishes this dependence, which proves that the system is compact.
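
The display missing from the proof is presumably the chain rule for the joint probability of dependent events; with events A_1, ..., A_n (our notation), it reads

    \[
    P\!\left(\bigcap_{i=1}^{n} A_i\right)
      = P(A_1)\, P(A_2 \mid A_1)\, P(A_3 \mid A_1 A_2) \cdots
        P\!\left(A_n \,\middle|\, \textstyle\bigcap_{i=1}^{n-1} A_i\right),
    \]

so the compound execution indeed depends on the occurrence of every specific event A_i.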

So, according to Lemma 1 and the compactness of the local subnetwork of the “distributed video on demand system,” we obtain the corresponding expression, and by reversing the order of conditioning we obtain its equivalent form. The occupancy class of the particular section is then updated according to Figure 3. Otherwise, the service request is not admitted to the local system; it is considered blocked for the local networking system and is forwarded to the remote database servers before being discarded from the distributed system.

The request is forwarded to retrieve the video content from the distributed data storage; the distributed data servers are presented in Figure 2. Consider the probability that a given number out of the video storage servers are active, where active means that the storage server has the required encoded video file and the end-to-end link is live throughout the streaming in the session. The event space consists of the possible numbers of active servers, each with its probability. Let the aggregate stationary bit rate delivered by the distributed servers in a session (Figure 2) be given, together with the equivalent capacity of the network with respect to the bit-streaming rate for the whole network (Figure 1) in that session. The quantity of interest is the smallest positive integer number of servers that hold the required video objects or chunks of the video stream and are currently streaming in the session. This is the minimum number of database servers, selected from the full set of database servers, such that at least that many servers have the required segment. The search for a video segment or stream then generates a query that is broadcast from the application server towards the database servers, and the optimization problem reduces to finding this minimum number. In the next section we present the video stream searching procedure that identifies the storage location, that is, the database server IP address.
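
One plausible reading of the server-availability condition, assuming the N database servers are active independently with a common probability p (an assumption of ours, since the paper's exact expression was lost), is that the smallest positive integer m is chosen so that

    \[
    P(\text{at least } m \text{ of } N \text{ servers are active at session } s)
      = \sum_{j=m}^{N} \binom{N}{j} p^{j} (1-p)^{N-j} \;\ge\; \alpha,
    \]

where \alpha is the required confidence level (for example, the 90% level used in Section 6).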

5.2. Session Based Query Matched Algorithm

Algorithm 1 runs at the application server.

(1) At time t = t0                      // t0 is the start time of the session
(2) Var Q : Queue_of_finite_Size        // FIFO queue of database node IDs
(3) While (the session is active) do
(4) Begin
(5)   v := initial node                 // v belongs to the set of database storage nodes
(6)   Boolean match_node_not_found := True
(7)   Q := empty queue of finite size
(8)   Push(Q, v)                        // first in, first out
(9)   While (match_node_not_found and Q is not empty) do
(10)  Begin
(11)    u := Pop(Q)                     // Pop is a queue operation
(12)    For each w in the single-hop neighbors of u do
(13)    Begin
(14)      If (query string matches the content index of w) then
(15)      Begin
(16)        match_node_not_found := False
(17)        Return node ID of w
(18)        Exit
(19)      End
(20)      Else if (w has not been visited)
(21)        Push(Q, w)                  // Push is a queue operation
(22)    End                             // End For
(23)  End                               // End inner While
(24) End                                // End outer While
(25) Send("query matched node not found")   // to the application server
(26) Stop
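
A compact Python rendering of Algorithm 1 is given below for reference. The adjacency-list graph representation and the content index callback are simplifications introduced here for illustration; they are not part of the original algorithm listing.

    from collections import deque

    def query_matched_node(start, neighbors, has_segment, query):
        """Breadth-first search over the database storage nodes (Algorithm 1).

        start       : node ID of the initial database node
        neighbors   : dict mapping a node ID to its single-hop neighbor IDs
        has_segment : callable(node_id, query) -> True if the node stores the segment
        query       : query string generated by the application server
        """
        queue = deque([start])          # finite FIFO queue of nodes to visit
        visited = {start}               # avoid revisiting nodes in a cyclic topology
        while queue:
            u = queue.popleft()         # the Pop operation of Algorithm 1
            for w in neighbors.get(u, []):
                if w in visited:
                    continue
                visited.add(w)
                if has_segment(w, query):
                    return w            # node ID of the matched database server
                queue.append(w)         # the Push operation of Algorithm 1
        return None                     # "query matched node not found"

    # toy usage with a hypothetical five-node topology
    topology = {"db1": ["db2", "db3"], "db2": ["db4"], "db3": ["db5"], "db4": [], "db5": []}
    store = {"db5": {"movie-42"}}
    print(query_matched_node("db1", topology, lambda n, q: q in store.get(n, set()), "movie-42"))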

6. Performance Analyses

The simulation parameters are chosen according to a realistic streaming scenario in the distributed architecture; their values are summarized in Table 1. For simplicity, we assume that each occupancy class of customers is assigned an equal number of ports in every “sector of the proxy server,” so that a sector of a given number of ports is occupied by one “occupancy class of customer”; the size of an “occupancy class of customer” is fixed when the multithreading operation is invoked. The traffic arrival rate for the different classes of service requests starts at 1 Mb/s and ends at 10.5 Mb/s. The number of viewers in each client center is 20. Figures 4–7 present the performance of the distributed video on demand system at the local network with respect to the analytical methodology.

We ran 1000 trials, and the mean of the data is presented in the following figures. Figure 4 presents the blockage in the “distributed video on demand system” with respect to the incoming “service request” traffic flows from the 20 clusters towards the proxy server; here the first sector is blocked. The performance of the distributed system at the local network is evaluated with randomly generated probability sets for the different occupancy classes of service requests. Figure 5 presents the performance of the distributed VOD system for another randomly generated probability set; here a different sector is blocked.

The “request for service” traffic arrival rate at the proxy server varies from 1 Mb/s to 10.5 Mb/s in increments of 0.5 Mb/s. Figure 6 presents the performance of the distributed video on demand system at the proxy server with respect to another randomly generated probability set; here a further sector is blocked. Similarly, Figure 7 presents the simulation graph when yet another sector is blocked at the proxy server.

Figures 8 and 9 present a comparative study of system performance at the proxy server during the pull-stream operation for different service requirements. The “pull data operation” draws raw data from every local storage to a connected proxy server in the distributed architecture. The service requests are of different types, such as fast forward, pause, and move backward. Figures 8 and 9 present the comparative graphs for one proxy server; the pull operation retrieves the multiplexed raw data stream from the local storage to the connected proxy server. The plotted graphs show the system performance with and without the proposed methodology for service request handling. Both figures show that, when the traffic handling methodology is not used, blocking at the proxy server of the “distributed video on demand system” increases very rapidly. Figures 8 and 9 each present the comparative study for a different blocked sector at the corresponding proxy server, and both show that the blocking rate grows exponentially without the methodology.

When the methodology is used, the blocking curve stays below the threshold level. Clearly the service requests can be handled very efficiently by using the methodology at the proxy server of the distributed VOD system. As the load on the video on demand system decreases, the overall system performance increases, and the packet loss is clearly reduced by the methodology, which enhances the system performance. In the second part of the simulation, we consider the distributed database storage that is well connected to the application servers. A service request submitted to the proxy server that cannot be served by the connected local server proceeds further towards the web cache of the distributed database server. The simulation uses the stream search algorithm running at the application server.

In the second stage of the simulation, we consider seven application servers, each well connected with three database storage servers, according to Figure 2. Similarly, in the next phase of the simulation, we consider six application servers, each connected with five data storage servers. We considered a 90% confidence level for acceptance. From Figure 10 we observe that 42000 user service requests were served within a hop count of 4, and Figure 11 shows that 51000 user service requests were served within a hop count of 3. So the balanced distribution of the video streams among the database servers reduces the searching cost and hence reduces the delay and the packet drop. As a rough performance metric for the proposed methodology, we take the sum, over the hop counts, of the number of requests served multiplied by the corresponding hop count; this gives a score of 368000 according to Figure 10 and 317000 according to Figure 11. The score value depends directly on the number of computations needed to search for the corresponding stream with the proposed algorithm, and the performance of the system increases as the score value decreases. In the third stage of the simulation, we consider the compact network in which data can be pulled for the user interactive session. The number of requests inside the network grows from 0 to 1000 over one minute (60 seconds), shown on the horizontal axis. The vertical axis is the traffic load normalized with respect to the maximum on-demand bandwidth requirement that the distributed network provides in a session. We consider a unified demand parameter at a 90% confidence interval; according to (13) and expression (Exp ii), it depends on the number of interactive modes used in a session and on the number of active links of the corresponding users, and hence on the size of the network at that session, as well as on the total trunk bandwidth provided in the ideal scenario at that session.
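
The rough score used above is simply the hop-weighted sum of served requests. The tiny sketch below illustrates the computation; the per-hop request counts are hypothetical placeholders, not the measured values behind Figures 10 and 11.

    def performance_score(served_per_hop):
        """Sum over hop counts of (requests served at that hop count) x (hop count).
        A smaller score means fewer search computations, hence better performance."""
        return sum(hops * count for hops, count in served_per_hop.items())

    # hypothetical distribution of served requests by hop count
    example = {1: 5000, 2: 12000, 3: 30000, 4: 42000}
    print(performance_score(example))   # 5000 + 24000 + 90000 + 168000 = 287000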

Figures 12 and 13 present the traffic load inside the distributed network for various stages of the user driven interactive session. We observe from the figures that the traffic load increases as the number of users operating user driven interactive modes at a session increases. The unified demand parameter value increases from 0.2 to 1.0 with a step size of 0.2 for a particular session. Figure 13 presents the normalized traffic load in that interactive session for the number of peer nodes present in the distributed system.

A peer node represents a local storage, a proxy server, or a distributed storage node in the distributed system.

Similar results are presented in Figure 12 for the unified demand parameter value ranging from 0.05 to 0.1 with a step size of 0.02, for another interactive session with the same set of distributed network environments. In both cases, we considered an interactive session of 60 seconds and 1000 trials.

7. Related Work

Interactive operation on multimedia systems represents a key technology. It is evolving from viewer demand and research prototypes to commercial deployments. Supporting truly interactive multimedia services efficiently requires the smooth provisioning of personalized, dedicated channels or sessions for each user. For this class of services, the user requires complete control over the session and the freedom to explore the depth of both live and stored archives. Developing these interactive multimedia services requires solving a diverse set of technical problems, many of which are tractable. According to Little and Venkatesh, virtual VCR capabilities and interfaces for viewing parallel media streams are the most crucial requirements for interactive video on demand services [19]. Earlier work by Paxson and Floyd [20] examined the use of Poisson processes, adopted for analytic simplicity, and a number of traffic studies have shown that packet interarrivals are not exponentially distributed. In that work, the authors considered 24 wide-area traces to investigate a number of wide-area TCP arrival processes. The traces are characterized by network, date, duration, and packet count, and the analysis covered TCP connection interarrivals, TELNET packet interarrivals, and FTPDATA connection arrivals. The Pareto distribution plays a role both in TELNET packet interarrivals and in the size of FTPDATA bursts. Huang et al. [21] mainly focus on the single video stream pull approach, in which a single peer only redistributes the video stream that it is currently using. This work quantifies the 95th-percentile server bandwidth cost that could have been saved had single-peer-assisted deployment been employed instead. The streaming session trace records were generated by the Windows Media Player (WMP). Chat applications do not consume huge amounts of traffic, which implies that catering to such users can be a promising way of attracting them, especially in low-bandwidth environments. Chat traffic uses neither a well defined port nor a well defined protocol; rather, there is a wide spectrum of protocols [22], such as the IRC protocol used by IRC networks, their applet-based user interface equivalents, and custom application protocols used by HTML web chat systems running on top of HTTP. Diot and Gautier [23] describe the design, implementation, and evaluation of MiMaze, a multiplayer game with a distributed architecture that uses the multicast backbone for carrying IP multicast traffic over the Internet. They describe the design of dedicated transmission control mechanisms; MiMaze uses a distributed communication architecture based on the IP multicast protocol suite (RTP/UDP/IP), and the work analyzes a distributed interactive game on the multicast Internet. Zink et al. [24] analyze content distribution in YouTube and conduct a measurement study of YouTube traffic in a large university campus network. Based on these measurements, the authors analyze the duration and the data rate of streaming sessions. To monitor the YouTube traffic, a client retrieves a video stream from YouTube, and a multithreaded catcher obtains YouTube usage statistics for the particular campus network. Monitoring is done in a two-step process: in the first step, the collected data are analyzed for signaling traffic, that is, the HTTP message exchange between the client and the YouTube web server; in the second step, the video streaming itself is monitored.
The video data sent from the CDN (content delivery network) storage server is observed through its TCP headers. The traces are characterized by trace length, user population, and the effect of content of local importance on the performance of different distribution architectures. The current tendency in Internet traffic is a shift from web traffic to P2P traffic, and many studies focus on the characterization of P2P traffic [25, 26]. Kim et al. [27] propose a new method to identify current Internet traffic, a preliminary but important step toward traffic characterization. The traffic identification method considers flow grouping in a peer-to-peer (P2P) communication architecture, and the video traffic is traced with time-series graphs based on traffic flows, packets, and bytes as metrics. Broadband Internet service is popular in many parts of the world, and a number of research studies have examined the characteristics of such traffic. Maier et al. [28] consider a common assumption regarding residential traffic, namely, that the downstream dominates the upstream, that is, that most bytes are transferred to the local side. Indeed, this assumption has shaped, and is ingrained in, the bandwidth allocations of ADSL (asymmetric digital subscriber line) and cable broadband offerings. For DSL session characteristics, Maier et al. study the behavior of users' DSL sessions (periods of connection to the ISP's network). To trace the traffic, the authors considered the HTTP, BitTorrent, eDonkey, SSL, NNTP, and RTSP protocols.

The performance of traffic monitoring and analysis systems depends heavily on the number of flows as well as on the link utilization and the pattern of packet arrivals. Kim et al. [29] examine the characteristics of recent Internet traffic from the perspective of flows. The authors use a pipeline-based real-time traffic monitoring and analysis system architecture. To collect IP traffic trace data, they use the “NG-MON” flow generator and place it at the connection point of the network. The distribution of traffic flow durations and the distribution of the number of packets per flow are presented for the TCP and UDP protocols, and the simulation results also present the flow density, with bytes and packets used as metrics. In the case of peer-to-peer IPTV communities, Silverston et al. [30] consider P2P IPTV traffic, providing useful insights into both the transport-level and packet-level statistical properties of P2P IPTV and into its impact on network nodes and links. The data traffic was collected by peers within the community watching the 2006 FIFA World Cup, and the collected data were analyzed for mesh-based topological systems (PPLive, PPStream, SOPCast, and TVAnts). The traces plot the traffic of these systems in terms of the amount of video downloaded from neighboring peers over time, and they plot the joint probability density function (PDF) of the interpacket time (IPT) and the packet size (PS) of the download traffic. The IPT of each packet is the time that elapsed between that packet and the previous one of the same session. Loguinov and Radha [31] analyze the dynamics of a live streaming experiment conducted between a number of unicast dial-up clients, connecting to the Internet through access points, and a backbone video server. The clients streamed low bit rate MPEG-4 video sequences from the server over paths traversing a number of distinct Internet routers. The authors considered a client-server architecture for MPEG-4 streaming over the Internet. The server was fully multithreaded to ensure that the transmission of video chunks was performed at the target bit rate of each streaming session and that a quick response was provided to clients' NACK requests. The streaming was implemented in bursts of packets (with varying burst duration). The authors considered six datasets, each collected from a different machine; the plots mainly show the distribution of the number of end-to-end hops and the average packet loss rates, and the authors also analyze the dynamics of a large number of TCP web sessions at a busy Internet server. Liang [32] systematically investigates long-term, online, variable bit rate (VBR) video traffic prediction, a key and complicated component of advanced predictive dynamic bandwidth control and frame allocation. Since VBR video traffic will form a major part of the traffic produced by multimedia sources in the Internet and other packet/cell-switching broadband networks (such as ATM), many researchers have focused on VBR video traffic prediction. Liang [32] traces the prediction of video frames with a multiresolution-learning neural-network (NN) based model.

8. Conclusions

In this work, we presented a three-phase simulation of a distributed video on demand system for interactive sessions. The first stage of the simulation shows that the proposed methodology reduces the blockage at the proxy server for different types of service requests and serves them efficiently from the local server. If the requested video stream is not present at the local storage, or the blockage is above a preassigned threshold, the service request proceeds further to the distributed databases. The second stage of the simulation shows that the session based query search algorithm, which runs at the application server, reduces the search hop count to the required storage stream in the “distributed video on demand system.” In the third phase, we saw that the bandwidth requirement depends entirely on the number of interactive modes used by the user in any session. An increase in the number of users brings heavier use of interactive modes, which leads to a higher bandwidth requirement in any session. The size of the network has little impact on this scenario; the network scales linearly with the number of peer nodes, and the unified demand parameters do not depend on the size of the network or the number of peer nodes. Furthermore, the single-user bandwidth requirement model for interactive modes in any session does not depend on the number of peer nodes present in the distributed network. The simulation results reflect the same behavior.

Competing Interests

The authors Soumen Kanrar and Niranjan Kumar Mandal declare that there is no conflict of interests regarding the publication of the paper.

Acknowledgments

The authors are grateful to Sharmista Das Kanrar from Bishop Westcott, Ranchi, India.