Abstract

A pure Peer-to-Peer (P2P) network requires efficient transport of chunked video objects to the proxy servers in a mesh network. The rapid growth of video-on-demand users causes congestion at the proxy server and across the overall network. The situation calls for an efficient content delivery procedure from the distributed storage to the video-on-demand viewer. In the general scenario, even if a proxy server does not possess the required video stream, or a chunk of it, the content should still be streamed smoothly and rapidly to the viewer. This paper shows that a multitier, mesh-shaped hybrid architecture composed of P2P and mesh architectures increases the number of requests served in a dynamic environment compared with a static one. An optimized storage-path search reduces unnecessary query forwarding and hence increases the volume of content delivered to the desired location.

1. Introduction

The efficient flow of information traffic is a key element in the current technology and business environment. This traffic flow is controlled by a complex network architecture and a supporting communication infrastructure. High-speed network transport mechanisms serve as enabling technologies for new classes of communication services such as multimedia and video on demand. The smooth transport of chunked video objects over the network is a challenging issue. A large body of literature addresses this issue with various concepts based on content delivery networks (CDN), pure Peer-to-Peer networks, and hybrids mixing CDN with Peer-to-Peer networks [1]. How to design an efficient P2P VOD system with high utilization of bandwidth and low maintenance cost remains an open challenge [2]. The demand for streaming video over low-bit-rate channels is increasing at a fast pace for many applications such as newscasts, video conferencing, distance learning, video games, and entertainment. Some form of traffic and congestion control, admission control, and packet dropping needs to be considered owing to the characteristics of video traffic. However, a major bottleneck is the timely delivery of a large amount of data through very limited bandwidth. The issue directly involves how efficiently the overlay network adapts to a dynamic topology. In a tree-shaped overlay network, the video content is pushed from the origin storage server to the peer nodes. Conceptually, the tree-based overlay network was addressed first, followed by the multitree streaming approach [3, 4], in which the topology changes dynamically with multiple subtrees in place of one single tree [4, 5]. However, the mesh-based chunk delivery approach gives better results for both video on demand and live video streaming, as observed in PPLive and LiveSky [6]. In a mesh-shaped overlay network, peers pull "video objects" from neighboring peers [7].
The pull operation is coordinated by the buffer maps exchanged between peers. To achieve good quality of service, delays in delivering chunks of "video objects" have to be handled efficiently. Quality of Experience (QoE) is defined in [8] as "the overall acceptability of an application as perceived subjectively by the end user." In general, HTTP streaming has a poor QoS (quality of service) but a good QoE, because HTTP uses a reliable mode of transport; the QoS parameters considered here are packet loss rate, packet delay, and so forth. It is therefore highly desirable to enhance HTTP/TCP streaming as a major technology for media transport [9]. In adaptive bit-rate streaming, the media file is fragmented into small segments or chunks of the same duration, each corresponding to some specific time interval [10]. Each chunk is decoded independently, enabling seamless switching from one quality to another when network conditions change frequently: once the playout of a chunk finishes, the video player can start playing the next chunk at a different quality. So, for adaptive bit-rate streaming, lossy compression is a better option for video on demand [11]. Delay occurs due to the search for a chunk of a "video object"; on the other side, delay occurs due to the propagation and transmission delays from the neighboring peer. So the quality of service mainly depends on a fast and correct path search. The goal is to find the desired video stream with minimum hop counts toward the storage peer, to avoid unnecessary query forwarding, and to minimize inter-server gossiping during chunk transfer [12, 13]. The major routing protocols for wireless ad hoc networks have traditionally focused on finding paths with minimum hop count (MHC); however, such paths can include slow or lossy links, leading to poor throughput [14].
Energy-efficient routing algorithms in wireless networks typically select minimum-cost multihop paths, but those energy-aware routing algorithms select paths with a large number of short-distance hops in a variable-transmission-power environment [15]. The quality of service depends on the aggregate stationary bit rate exceeding a minimum threshold level from the viewer's perspective [16]. This paper is structured as follows. Section 1 introduces the problem; the basic video encoding mechanisms are discussed in this section, followed by video object formation. Section 2 presents the transfer of chunks of video objects over the hybrid architecture. Sections 3 and 4 present the required architecture and the simulation environment of the problem, with the conclusion at the end.

(1) Video Traffic Burst in Network. A recurring theme in traffic transport over broadband and high-speed networks is traffic "burstiness." It is exhibited by key services such as compressed video and file transfer. Burstiness arises in the course of a traffic flow due to the presence of several relatively short interarrival-time sequences. The peak rate R_p is an important requirement parameter used during connection setup. The other major traffic descriptors are the mean burst period b, the average bit rate m, and the utilization factor ρ, defined as ρ = m/R_p. The peak rate R_p together with ρ yields the mean bit rate and the variance of the bit rate. The mean burst period b describes how bursts occur from the sender side. It is used to discriminate between connection setups for streaming from different sources that, during multiplexing, have the same peak and mean bit rates but display different behavior. The multiplexer can be modeled as a single queue with a buffer of size x and servers each with a fixed streaming rate c [17, 18]. The buffer overflow probability for the finite buffer size x is ε; it can be obtained for given R_p, ρ, and b. At connection setup, the equivalent bandwidth of a connection is the smallest service rate c for which the buffer overflow probability is smaller than ε. The approximate upper bound of the equivalent bandwidth for a single isolated connection, ĉ, depends on the parameters (R_p, ρ, b, x, and ε). The equivalent bandwidth of the multiplexed connections can then be presented as the sum Ĉ = Σ_i ĉ_i. As this uses the approximated upper bound for each ĉ_i, the equivalent capacity is definitely overestimated. To eliminate this constraint, let us consider a stationary estimation derived from a bufferless fluid-flow model. The fluid traffic model does away with individual traffic units; instead, it views traffic as a stream of fluid characterized by a flow rate, such as bits per second. The traffic count is replaced by a traffic volume. The bufferless fluid-flow model for the stationary approximation yields the equivalent bandwidth C, selected to ensure that the aggregate stationary bit rate exceeds C only with probability smaller than ε. In general form it can be presented as C = m + α′σ, where the standard deviation σ of the aggregate bit rate can be determined from the stationary distribution of the number of active sources and α′ depends only on ε. The number of active sources can be modeled by a continuous-time Markov chain, as it potentially captures traffic burstiness due to the presence of nonzero autocorrelation in the interarrival-time sequence [19].
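The stationary (bufferless fluid-flow) approximation can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation: it assumes a Gaussian approximation of the aggregate rate with α′ = sqrt(−2 ln ε − ln 2π), as in the classic equivalent-capacity literature, and the source parameters in the example are invented.

```python
import math

def stationary_equivalent_bandwidth(mean_rates, variances, eps):
    """Bufferless fluid-flow (stationary) approximation: pick C so the
    aggregate stationary bit rate exceeds C only with probability < eps,
    using a Gaussian approximation of the aggregate rate:
        C = m + alpha * sigma,  alpha = sqrt(-2 ln(eps) - ln(2 pi))."""
    m = sum(mean_rates)                 # aggregate mean bit rate
    sigma = math.sqrt(sum(variances))   # std dev of the aggregate rate
    alpha = math.sqrt(-2.0 * math.log(eps) - math.log(2.0 * math.pi))
    return m + alpha * sigma

# 20 multiplexed sources, each with 2 Mbps mean rate and variance 1 (Mbps^2)
c = stationary_equivalent_bandwidth([2.0] * 20, [1.0] * 20, eps=1e-6)
```

For a deterministic source (zero variance) the equivalent bandwidth collapses to the mean rate, as expected.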

(2) Video Encoding Mechanisms. Video is a collection of still images, ordered in a well-defined sequence. A compression technique is used to compress each individual still image of the set, and an encoding technique encodes each image independently according to its position in the sequence. The Joint Photographic Experts Group (JPEG) format is used to compress the still images of the set, and the sequence of still images is arranged in increasing order, independently and individually. MPEG-4 is a popular coding scheme that encodes complete frames of video; it is a collection of coding tools and maintains simple profiles. The most current implementations of temporal scalability use H.263, MPEG-4, and temporal subband coding [20]. The MJPEG format is motion JPEG: video compressed and encoded using JPEG. Every video signal that carries information contains a definite amount of redundancy, so a video sequence contains redundancy, and the aim of video compression is to remove this redundancy from the video signal. Generally, two types of compression technique exist: lossless compression and lossy compression. The lossless compression technique removes the statistical redundancy present in the video signal; when the transported video signal is reconstructed at the receiving end, it is an identical copy of the original video. However, currently available technology supports compressing the video signal only to a modest level. This is the greatest handicap of the lossless compression technique, particularly for storing and transporting video signals over the network, as it consumes extra storage space and bandwidth of the underlay network [21–23].
In lossy compression, we can compress the video signal much further, but the reconstructed video signal at the receiving side is not identical to the original. Lossy compression can meet a given bit rate for storage and can maintain an adaptive bit rate during transmission of the video signal over the network or the Internet; different CODECs run at the storage as well as at the client sites. So, for adaptive bit-rate streaming, lossy compression is a better option for on-demand video streaming. The aim of video source coding is bit rate reduction for storage and transmission. Compression is achieved by removing different types of redundancy: spatial, temporal, or frequency redundancy. Each image is usually divided into many blocks, each of size 8 × 8 pixels. These 64 pixels are then transformed into a frequency-domain representation by using what is called the discrete cosine transform (DCT) [24, 25]. The frequency-domain transformation clearly separates the low-frequency components from the high-frequency components. Conceptually, the low-frequency components capture the visually important content, whereas the high-frequency components capture the visually less striking content. The goal is to represent the low-frequency (visually more important) coefficients with higher precision, that is, with more bits, and the high-frequency (visually less important) coefficients with lower resolution, that is, with fewer bits. Since the high-frequency coefficients are encoded with fewer bits, some information is lost during compression, and hence this is referred to as "lossy" compression. When the inverse DCT (IDCT) is performed on the coefficients to reconstruct the image, the result is not exactly the same as the original image, but the difference between them is not perceptible to the human eye.
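As an illustration of the DCT-and-quantize step described above, the following hedged Python sketch applies a naive 2-D DCT-II and uniform quantization to an 8 × 8 block; the quantization step size q = 16 is an arbitrary choice for demonstration, not a standard JPEG quantization table.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (O(N^4), fine for illustration)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, q=16):
    """Uniform quantization: round each coefficient to the nearest multiple
    of q; the small high-frequency coefficients mostly map to zero."""
    return [[round(c / q) for c in row] for row in coeffs]

flat = [[128] * 8 for _ in range(8)]   # a flat gray 8x8 block
coeffs = quantize(dct2(flat))          # only the DC coefficient survives
```

For this constant block, all AC (higher-frequency) coefficients quantize to zero, leaving only the DC term; this is exactly the behavior run-length coding later exploits.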

The current enhancements in 2D (two-dimensional) video coding are classified mainly into two categories, nonscalable and scalable [26]. Nonscalable coding includes, for example, H.264/AVC (advanced video coding) and H.265/HEVC (high-efficiency video coding); scalable video coding includes, for example, H.264/AVC with the SVC (scalable video coding) extension. In nonscalable coding the bit rate is not reduced, whereas in scalable coding the bit rate is reduced. Generally, H.264/AVC uses an intracoding concept to predict a block inside a frame, considering neighboring blocks within the same frame. H.264/AVC provides flexibility with 4 × 4 and 8 × 8 blocks of samples for the transformation. For intercoding, H.264/AVC uses a fundamentally advanced concept of predicting frames from a set of highly eligible candidate past and future reference frames. H.264/AVC has 10 intraprediction modes: eight angular modes, one DC mode, and one planar mode. Angular prediction interpolates from reference pixels at locations based on the angle; the DC mode uses a constant value that is the average of the neighboring pixels (reference samples); the planar mode is the average of horizontal and vertical prediction. The new key features of H.264/AVC are enhanced motion compensation, small-block transform coding, an improved deblocking filter, and enhanced coding. H.265/HEVC is the successor to H.264. H.265 is meant to double the compression rate of H.264, allowing for the propagation of 4K and 8K content over existing delivery systems. H.265/HEVC uses the concept of partitioning the frame's luminance pixels into coding tree blocks of sample sizes 16 × 16, 32 × 32, and 64 × 64; each is flexibly and smoothly partitioned into multiple variable-sized coding blocks. H.265/HEVC uses an updated intracoding procedure for merging small partitions of coding trees, and a wide range of intraframe prediction is used. H.265/HEVC massively improves the flexibility of the transform structure, which matches the coding tree block structure with transform sizes of up to 32 × 32. H.265/HEVC improves error resilience by using the video parameter set (VPS), which signals essential syntax information for decoding. H.265/HEVC has 35 intraprediction modes: 33 angular modes, one DC mode, and one planar mode [26, 27].

(3) Video Object Formation. The DCT coefficients of each 8 × 8 pixel block are encoded with more bits for the highly perceptible low-frequency components and fewer bits for the less perceptible high-frequency components. This is achieved in two steps. The first step is quantization, which eliminates perceptibly less significant information, and the second step is encoding, which minimizes the number of bits needed to represent the quantized DCT coefficients. Quantization is a technique by which real numbers are mapped into integers within a range, where the integer represents a level or quantum. The mapping is done by rounding a real number to the nearest integer level, so some information is lost during this process. At the end of quantization, each 8 × 8 block is represented by a set of integers, many of which are zeroes, because the high-frequency coefficients are usually small and end up being mapped to 0. The goal of encoding is to represent the coefficients using as few bits as possible. This is accomplished in two steps: run-length coding (RLC) provides the first level of compression, and variable-length coding (VLC) provides the next level [12, 20]. After quantization, the majority of the high-frequency DCT coefficients become zeroes. Run-length coding takes advantage of this by scanning the low-frequency DCT coefficients before the high-frequency DCT coefficients, so that the number of consecutive zeroes toward the end of the scan is maximized. This is accomplished by scanning the 8 × 8 matrix in a diagonal zigzag manner. Run-length coding encodes consecutive identical coefficients using two numbers: the first number is the "run" (the value that occurs consecutively), and the second number is the "length" (the number of consecutive occurrences). Thus, if there are n consecutive zeroes, instead of coding each zero separately, RLC represents the string of zeroes as the pair (0, n).
After RLC, there will be a sequence of numbers, and VLC encodes these numbers using a minimum number of bits [20]. The technique is to use the minimum number of bits for the most commonly occurring numbers and more bits for less common numbers. Since a variable number of bits is used for coding, it is referred to as variable-length coding [21, 22]. In video encoding, the order in which frames are coded is not the same as the order in which they are played. Thus, it is not necessarily true that the reference frame for a future frame is the one just before it in the playing sequence. In fact, the video encoder jumps ahead from the currently displayed frame, encodes a future frame, and then jumps back to encode the next frame in display order. Sometimes the video encoder uses two "reference" frames, the currently displayed frame and a future encoded frame, to encode the next frame in the display sequence. In this case, the future encoded frame is referred to as a P-frame ("predicted" frame), and the frame encoded using two reference frames is referred to as a B-frame ("bidirectionally" predicted frame). However, there is a problem with this approach, because an error in one of the frames would propagate through the encoding process forever. To avoid this problem, video encoders periodically encode one video frame using still-image techniques; such frames are referred to as "intraframes" or I-frames [28]. A sequence of frames from one I-frame to the next I-frame is referred to as a group of pictures (GOP). A GOP is best represented using a two-dimensional matrix. The order of encoding and the order of display are different for the frames in a GOP. A GOP is transported over an IP network just like any other type of data transmission [5].
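The zigzag scan and run-length step described above can be sketched as follows. This is a minimal illustration; real encoders use the standard zigzag table and then entropy-code the (value, count) pairs with VLC.

```python
def zigzag(block):
    """Scan an 8x8 matrix in the diagonal zigzag order used by JPEG/MPEG:
    diagonals of constant x+y, traversed in alternating directions."""
    order = sorted(((x, y) for x in range(8) for y in range(8)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [block[x][y] for x, y in order]

def run_length(seq):
    """Encode consecutive identical values as (value, count) pairs."""
    pairs = []
    for v in seq:
        if pairs and pairs[-1][0] == v:
            pairs[-1] = (v, pairs[-1][1] + 1)
        else:
            pairs.append((v, 1))
    return pairs

# A quantized block with only the DC coefficient left: one long run of zeroes.
block = [[0] * 8 for _ in range(8)]
block[0][0] = 64
pairs = run_length(zigzag(block))   # [(64, 1), (0, 63)]
```

Because the zigzag scan visits low frequencies first, the 63 zero-valued high-frequency coefficients collapse into a single (0, 63) pair.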
Typically, videos have three types of frames, I-frames, P-frames, and B-frames, where I-frames capture most of the information in a scene, followed by P-frames and B-frames, which capture the "deltas" from the original frame (captured in the I-frame) resulting from motion. Finally, I-, P-, and B-frames are packetized in the encoded format as a sequence of frames, which itself forms the video objects. When delivery is requested, the chunk delivery of "video objects" is done from the storage server to the neighboring requesting peer nodes. MPEG-4 looks after the collection of video objects (VO) [24]. A video object is an entity that the end user is able to manipulate; it is an area, which can be of any shape, and it may exist for an arbitrary length of time. The instance in which the video object occurs is called the "Video Object Plane" (VOP). In the traditional view, we can say that a VOP is a single frame and a set of frames forms a VO. A VOP has an irregular shape and occupies only a part of a frame. The separate objects are coded independently with different resolutions and qualities. These coding schemes adopt a linear uniform sampling scheme, disregarding the varying semantic importance of different frames or segments. The required quality of service (QoS) depends on the characteristics of the video traffic.

(4) Arrangement of Peer Nodes. The hierarchy is created by distributing peer nodes across different levels, as illustrated in Figure 1. The levels are numbered sequentially, with the highest level of the hierarchy being level zero (denoted by L0). Each level is partitioned into a set of clusters, and each cluster is composed of peer nodes. The size of each cluster varies between k and 3k − 1, where k is a constant [29]. A cluster is formed from a set of peer nodes that are close to each other. Every cluster has one "cluster head." The cluster head also logically belongs to the lower level; that is, a cluster head of level i also logically belongs to the level below it. The cluster head has the minimum distance (with respect to hop count) to all other peer nodes of that cluster at the same level. Since the size of a set of peer nodes has the upper bound 3k − 1, during dynamic cluster formation the size of a set can reach 3k, because at least one higher-level cluster head logically belongs to the lower level; the set then splits into two clusters of size about 3k/2 each. Data is transported from one level to another through the cluster heads. By default, the storage server stays at level L0, which logically belongs to every level. The choice of a cluster leader is very important when a new peer node joins the system at a specific level in the hierarchy, because the new peer node should require a minimum number of query messages to find its position. During video chunk transport, the formed multicast tree is overlaid on the hybrid architecture. If there are N peer nodes in the hierarchy, the number of levels is bounded by O(log_k N), where k is a constant.
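A minimal sketch of the bounded-cluster-size rule described above, assuming a split occurs when a cluster reaches 3k members; the helper names maybe_split and level_bound are ours, not from [29].

```python
import math

def maybe_split(cluster, k=3):
    """Bounded cluster size: a cluster holds between k and 3k - 1 peers;
    when it grows to 3k members it splits into two clusters of about 3k/2."""
    if len(cluster) < 3 * k:
        return [cluster]
    half = len(cluster) // 2
    return [cluster[:half], cluster[half:]]

def level_bound(n, k=3):
    """The number of levels in the hierarchy is O(log_k n)."""
    return math.ceil(math.log(n, k))

parts = maybe_split(list(range(9)), k=3)   # 9 == 3k, so the cluster splits
```

Keeping clusters within [k, 3k − 1] bounds the fan-out at each level, which is what gives the logarithmic bound on the number of levels.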

According to Figure 1, the cluster heads of the clusters at a given level physically stay at that level, but each cluster head is logically part of the level below. A set of nodes communicates directly with its logical cluster head for exchanging data and information. This type of hierarchical distribution of peer nodes efficiently handles the problems of live media streaming in a large P2P network [30]. Multicast tree construction and efficient clustering of peer nodes based on a hierarchy of bounded cluster sizes markedly improve performance. Neighbors in the levelwise topology can exchange periodic soft-state refreshes without generating a high volume of control messages. The hierarchical architecture successfully reduces the traffic overload on the system. Other important advantages include smooth handling of peer node departure, smooth cluster head selection, and cost-effective maintenance of the cluster.

2. Chunk Video Object Transfer over Hybrid Architecture

Figure 2 presents the physical configuration of the overall structure of the video-on-demand system. Clusters of viewers are symbolically represented by viewer nodes. The cache proxy server connects the viewer nodes to the next level of peer nodes in the multitier architecture. In the proposed model of the multitier architecture, the cache proxy servers are installed at the highest level (the levels being numbered in reverse order). The proxy cache servers are not directly connected to each other; they are connected via the next-level peer nodes. All the levels, including the lowest level, that is, the video storage server, form the multitier mesh-shaped architecture. The job of a cache proxy server at the highest level is to import chunks of video objects from the next-level peer nodes.

Figure 3 presents the working mechanism of chunk transfer. The architecture is a composition of local and global networks, and the cache proxy server is part of the local as well as the global network. The peer nodes belong to different levels of the multitier architecture, and the position of the node holding the main video storage is fixed. An initial request is first served locally from the local cache proxy server.

If the local proxy server cannot serve the chunk video object, the request is broadcast to the neighboring peer nodes of the next level, and in this fashion the chunk request is forwarded among the peers. If any peer node possesses the chunk video objects, it rebroadcasts a chunk-availability message, and the chunk video object is then transferred according to the buffer-map procedure. This is a mesh push mechanism from the sending peer node and a mesh pull from the receiving peer. A copy of the chunk video object is stored at the local cache proxy server before delivery to the viewer node. A preliminary survey of the literature shows that more than 10^6 user nodes can concurrently decode live video streaming within the bit rate range of 400 to 800 Kbps [31]. Current HDMI H.264 video encoders for live streaming and broadcasting use bit rates from 250 Kbps to 10 Mbps. Dynamic page replacement in the cache memory of the proxy server can enhance the performance of video on demand [32], as in the "analysis and implementation of large scale video on demand system" [5]. The literature [33, 34] shows that proper cache memory can reduce P2P traffic load by 50% to 60%. Simultaneous viewers are asynchronous; that is, any viewer can access any part of a video at any instant of time. Due to the playback options in a video-on-demand system, a good number of sequential video objects are always available in the media player buffer of the viewers [35–38]. In general, the viewer accesses the video through the cache proxy server. The video objects are collections of video frames that feed the media player after being sequentially arranged from the viewer's receive buffer. If the complete video is not available at the cache proxy server, the unavailable part or parts of the chunk block or blocks are imported by the "buffer map via mesh pull approach," as shown in Figure 3.
The cache proxy server can use multiple channels to import chunks from any number of peers at any level of the multitier architecture. Similar buffer-map concepts are used in the chunk transfer between the peers in the multitier mesh-shaped architecture. Previously proposed video-on-demand architectures can be classified into some major categories:
(i) Closed-loop systems: the client retrieves the video segments from the server.
(ii) Prefix-caching-assisted periodic broadcast: a combined effect of open-loop and closed-loop systems.
(iii) Clients equipped with set-top boxes: the set-top box locally stores part of some popular videos.
(iv) Peer-to-Peer networks used for video-on-demand systems: the video segments are transported through the network according to peer node requirements.
In this paper, the proposed "distributed hybrid architecture" is a combination of P2P and mesh-type network architectures. The cost of video object transfer through this structure is successfully reduced in both the static and the dynamic environment.
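The miss-and-pull behavior of the cache proxy server described in this section can be sketched as follows. The class and peer names are hypothetical, and real systems exchange buffer maps rather than full chunk sets; this is only an illustration of the serve-locally-else-pull-and-cache flow.

```python
class Peer:
    def __init__(self, name, chunks):
        self.name, self.chunks = name, set(chunks)

class CacheProxy:
    """Mesh-pull sketch: serve a chunk from the local cache if present,
    otherwise pull it from the first next-level peer advertising it and
    cache a copy before delivering it to the viewer node."""
    def __init__(self, neighbors):
        self.cache = set()
        self.neighbors = neighbors

    def serve(self, chunk):
        if chunk in self.cache:
            return 'hit'
        for peer in self.neighbors:        # request forwarded to next level
            if chunk in peer.chunks:       # peer advertises availability
                self.cache.add(chunk)      # copy stored before delivery
                return 'miss:pulled-from-' + peer.name
        return 'miss:not-found'

proxy = CacheProxy([Peer('p1', {'c1'}), Peer('p2', {'c2', 'c3'})])
```

A repeated request for the same chunk becomes a cache hit, which is exactly the 50%–60% traffic reduction mechanism attributed to proxy caching in [33, 34].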

3. Analytic Architecture

Let C_s be the equivalent capacity of the network at session s: the minimum bandwidth required to ensure that the aggregate stationary bit rate approaches C_s within a tolerable frame loss probability ε. The aggregate stationary bit rate is required to approximate the adaptive encoding run at the client-side video playback; the aggregate bit rate maintains the video resolution within a certain standard. In a particular session, let L be the number of links that are active and simultaneously transfer chunks of video objects from the peer nodes through the proxy servers in the multitier architecture. The transferred streams or "chunk contents" are represented by the random vector (X_1, X_2, ..., X_L), where, for 1 ≤ i ≤ L, m_i = E[X_i] is the mean transfer rate for the individual ith viewer. Without loss of generality, we can assume that the X_i are mutually independent random variables. The aggregate content transfer for a particular session is S = X_1 + X_2 + ... + X_L, with mean E[S] = m_1 + m_2 + ... + m_L; this includes smooth video object transmission in interactive sessions (playback, move forward, and so forth). So far, nothing has been assumed about the behavior of the X_i for individual values of i. According to the Chebyshev inequality, for small positive numbers δ and ε (previously assumed), the least number L of active links is to be found such that, for all sessions,

P(|S − E[S]| ≥ δ) ≤ Var(S)/δ² ≤ ε.

Here ε is the approximate frame loss probability in a session. All the individual and shared links use at most the maximum bandwidth to transmit the chunk content; clearly S ≤ C, where C is the capacity of the network. Since ε is the packet-loss or bit-loss probability, the aggregate stationary bit rate of the streaming remains the same over a short burst period. Without loss of generality, the above inequality holds for the event {S ≥ C_s}, whose probability satisfies

P(S ≥ C_s) ≤ ε,

so we can consider C_s = E[S] + δ. The distribution of S can be obtained from a 2-state Markov chain.
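A hedged numeric sketch of the Chebyshev-style bound above: given i.i.d. links of mean rate mu and variance var each, find the smallest number of active links whose aggregate rate stays above a target with probability at least 1 − eps. The parameter values in the example are invented for illustration.

```python
def least_active_links(mu, var, target_rate, eps):
    """Smallest number n of i.i.d. active links (mean rate mu, variance var
    each) whose aggregate rate S_n drops below target_rate with probability
    at most eps, using the Chebyshev-style bound
        P(S_n < target) <= n * var / (n * mu - target)^2 <= eps."""
    n = 1
    while True:
        mean, variance = n * mu, n * var
        if mean > target_rate and variance / (mean - target_rate) ** 2 <= eps:
            return n
        n += 1

# Example: 2 Mbps mean per link, variance 1, 10 Mbps aggregate target, eps 5%.
n = least_active_links(mu=2.0, var=1.0, target_rate=10.0, eps=0.05)
```

With zero variance the bound degenerates to simply requiring the aggregate mean to exceed the target, which matches intuition.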
For every viewer node, there exists at least one active channel, including the case of broadcasting or multicasting of video data objects from the cache proxy server [32, 39]. The full or partial content of the required video may reside at any level of the peer nodes in the multitier architecture, while the video content is fully available at the cache proxy server. The proxy server is part of the mesh architecture and is placed at the maximum level of the multitier hierarchy at a strategic location. The proxy servers are not peer servers, so there are no interlinks between them; the main reasons are the requirements of security and of the billing process for commercial purposes. The billing servers are installed at level one of the multitier architecture, and level zero (L0) is the position of the video storage server. The proxy cache memory is updated or refreshed using "least frequently used" concepts [32, 40–42]. L0 is the level of the video storage server, and the highest level is for the cache proxy servers. If η is the average fraction of the full stream that each of the supplying peers shares or contributes during a session, clearly 0 < η ≤ 1. Let N be the total number of peer nodes, including the main storage servers, in the multitier architecture, and let u_0 be the upload capacity at level 0 of the mesh-type Peer-to-Peer network. Let N_i be the number of peer nodes at level i, and let U_i be the aggregate upload capacity of the peers at level i. In a balanced state of traffic flow, the smooth transfer of chunk video objects at any level depends only on the peer nodes of the level just below it in the multitier architecture. In a particular session of synchronous chunk transfer, the upload capacity usable by the peer nodes at level i is equivalent to the product of the average fraction of full streaming and the upload capacity of the participating peers at the level below.

This is expressed by U_i = ηU_{i−1}. In a real scenario, peer nodes at a higher level get chunk video objects only when the lower-level participating peers send the video objects to them. The throughput of the system is contributed by the average fraction of full streaming of the participating peer nodes. Some portion of the bandwidth is always used for chunk transfer between peers at the same level, according to Figure 2. So we get the expression U_i ≤ ηU_{i−1}.
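The balance relation U_i = ηU_{i−1} can be illustrated numerically; this is a sketch under the assumption that each level's usable upload capacity is the fraction η of the level below, seeded by the storage server's capacity u_0, with the example values invented.

```python
def level_upload_capacity(u0, eta, depth):
    """Sketch of the balance relation U_i = eta * U_(i-1): the usable upload
    capacity at level i is the fraction eta of full streaming contributed by
    the level below, seeded by the storage server's capacity u0 at level 0."""
    caps = [u0]
    for _ in range(depth):
        caps.append(eta * caps[-1])
    return caps

caps = level_upload_capacity(u0=100.0, eta=0.8, depth=3)
```

The geometric decay across levels makes concrete why some same-level bandwidth sharing (the inequality U_i ≤ ηU_{i−1}) further reduces what reaches the proxy tier.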

Dynamic cluster connections are presented in Figure 4. We observe three peer nodes at level i and four peer nodes at level i − 1. Dynamic clustering is a logical cluster construction among the peer nodes at every level for the real-time requirements of a session. Peer nodes at level i exchange or transfer the chunk video objects among themselves, while the cluster head at level i downloads video chunks from the dynamic cluster head at level i − 1. Downloading chunks of video objects from the lower level is given priority over "chunk video object" transfers within the same level of the multitier architecture in that session. Let k_i be the number of peer nodes at level i; its value can be approximated from the bounded cluster sizes at that level.

Let us assume that P_i is the probability that at least n_i out of the N_i peer nodes actively participate at level i of the multitier architecture in the sth session. Active means that the peer node has the required portion of the encoded video file and remains live during the chunk video object transfer in that session.

So we obtain an expression for P_i. During the chunk transfer of video objects, the aggregate bit rate thus depends upon the active participation of the peer nodes at every level i. The distributions in expressions (4) and (5) can be obtained from a multistate Markov chain describing the sequence of active participation of peer nodes transferring the chunks of video objects at session s in the ith level. Since the download capacity at any level depends upon the upload contribution of just the level below for that session, expressions (4) and (5) become expression (6). Clearly, for a session such as session s, by using expression (7) and after reordering the above sequence, we get expression (8); according to (5), we get expression (9). Since N_i is the number of peer nodes at the ith level of the multitier architecture, n_i in a session is the least required number of actively participating peer nodes at the ith level of the mesh architecture for smooth transfer of chunk video objects. Accordingly, expression (10) holds, and inequality (1) becomes inequality (11), where ε is controlled by a threshold that depends on the adaptive bit rate of the CODECs on both the sender and the receiver side; the CODECs run at the viewer's end and at the content-transferring peer or proxy server end. To find the optimized cost of transferring the chunks of video objects, the issue now becomes efficiently finding the video object chunk storage with minimum hop counts in the multitier architecture. To keep the value of ε low, the problem is to minimize unnecessary search packet forwarding and inter-server gossiping. We propose a heuristic-based path search for finding the chunk-containing peer nodes for transferring video objects, using Algorithms 1 and 2. An estimated cost function is used to estimate the distance to the possible destination peer nodes in that particular session. The initial information about the higher-level peers is already available to the proxy server in the multitier mesh architecture. For finding a peer, the search path is linear.
So the cost of the path is optimized; that is, unnecessary query forwarding is avoided to minimize inter-server gossiping. The path is the route from the root to the farthest node in that session, where the root is the proxy server and the distance is measured in hop counts. In reality, more than one peer node may contain the required chunk of video objects, so the search path will not always be linear: in the path-search computation for a session, some additional links exist, which is captured by expression (12). We seek the least-cost path, that is, the minimum hop count towards the fastest peer storage. Consistent hashing in the structured overlay of the multitier Peer to Peer network is also used. The request for the portion of the chunk video object is forwarded through the longest prefix match via Algorithm 2. The consistent hash function [43] assigns each node and key a fixed-bit identifier using a base hash function. A node's identifier is chosen by hashing the node's IP address, while a key identifier is produced by hashing the key. Here the term "key" refers to both the original key and its image under the hash function; similarly, the term "node" refers to both the node and its identifier under the hash function. Consistent hashing assigns keys to nodes: in a point to point network, name based consistent hashing maps a key onto a node. Both keys and nodes are assigned a fixed-length bit identifier. For nodes, this identifier is a hash of the node's IP address; for keys, it is a hash of a keyword, such as a file name or query string.
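The key-to-node assignment described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the identifier length M, the IP addresses, and the chunk name are assumptions made for the example.

```python
import hashlib

M = 16  # identifier length in bits (illustrative; the text leaves it unspecified)

def identifier(text: str) -> int:
    """Hash a node IP address or a keyword to a fixed M-bit identifier."""
    digest = hashlib.sha1(text.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

def assign_key(key_id: int, node_ids: list) -> int:
    """Consistent hashing: a key is assigned to the first node whose
    identifier is equal to or follows the key on the identifier circle."""
    for nid in sorted(node_ids):
        if nid >= key_id:
            return nid
    return min(node_ids)  # wrap around the circle

# Hypothetical nodes and a hypothetical chunk name:
nodes = [identifier(ip) for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.3"]]
key = identifier("chunk_042.mp4")
owner = assign_key(key, nodes)
```

A node joining or leaving only reassigns the keys adjacent to it on the circle, which is the property that makes this scheme attractive for a churning peer population.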

Input: s   // the query-emitting node
       d   // the query sink node
  Initialize
   Buffer1: finite size queue
   Hop_count (every node) := (−1)   // peers at
                // every level
  Begin
  Query initiated from s   // proxy server
  If (s.node_id == d.node_id) && found (query message)
   then
    Hop_count := 0
    Return Hop_count
    Output: chunk present in proxy server
    Exit
   End
  else
   for each single hop node u from the adjacency list of s
    Compute: query forwarded to the longest prefix match node using Algorithm 2
    Buffer1 push (u)
   while (Buffer1 non empty) do
    pop node v from Buffer1
    for each u from adj(v)   // select all the
           // single hop neighbors of v
     If (Hop_count (u) == (−1))   // peer node has not
             // yet received the query
      then
       Hop_count (u) := Hop_count (v) + 1
       If ((node identifier (u) == node identifier (d)) && match (keyword_identifier))
        then   // peer node holds the queried stream
         return Hop_count (u)
         Output: chunk present
        else   // u joins the search frontier
         Buffer1 push (u)   // heuristic search continues
       // end block
     // End for each
   // End While
   Output: "Search query stream does not exist in this network."
  End begin
  Stop
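Algorithm 1's breadth-first hop-count search can be sketched in Python as follows. The longest-prefix forwarding of Algorithm 2 is abstracted away, and the adjacency structure `adj` and the `has_chunk` predicate are illustrative assumptions:

```python
from collections import deque

def hop_count_search(adj, source, has_chunk):
    """Breadth-first search from the proxy server (source), labelling every
    visited peer with its hop count; returns the hop count of the nearest
    peer holding the requested chunk, or None if the chunk is absent."""
    if has_chunk(source):
        return 0  # chunk already present at the proxy server
    hops = {source: 0}  # Hop_count(v); unvisited peers are implicitly -1
    buffer1 = deque([source])
    while buffer1:
        v = buffer1.popleft()
        for u in adj[v]:
            if u not in hops:  # peer has not yet received the query
                hops[u] = hops[v] + 1
                if has_chunk(u):
                    return hops[u]  # chunk present
                buffer1.append(u)
    return None  # search query stream does not exist in this network

# Illustrative multitier mesh: proxy "P" at the top, peers below.
adj = {"P": ["A", "B"], "A": ["P", "C"], "B": ["P", "C"], "C": ["A", "B"]}
```

Because the search proceeds level by level, the first peer found with the chunk is necessarily at the minimum hop count, which is the quantity the simulations in Section 4 measure.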
Variable:
 Prefix: list
 M: Array of Record
 LPM, r: Record   // LPM is the longest prefix match
 p, B: Array of Character
 i, k, low, high, W: integer
Initialize
 Prefix := set of peer node prefixes
 M[i].length := prefix length of the ith distinct length
 M[i].hash := hash table for prefixes of that length
 LPM.address := storage server address   // default next hop
 W := range of the array   // 32 bits for IPv4, 128
           // bits for IPv6
 B := binary string of size W   // the query key identifier
Begin
 low := 1; high := number of distinct prefix lengths;
 while (low <= high) do
  i := (low + high) / 2;
  k := M[i].length;   // candidate prefix length
  p := Assign (most significant k bits of B);
  // index position of the binary string associated
  // with B in the hash table
  r := Search (p, M[i].hash);
  // search hash table for p
  If (r == Null) then high := i − 1;
  // search in lower half
  else LPM := r; low := i + 1;   // search in upper half
 // end while
 Update: Query forward to (LPM.address)
End begin
Stop
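A minimal Python sketch of the prefix-length search behind Algorithm 2. For simplicity it scans the hash tables from the longest to the shortest distinct prefix length (the binary search over lengths in Algorithm 2 additionally requires marker entries to be correct); the table contents and next-hop names are illustrative assumptions.

```python
def longest_prefix_match(bitstring, tables):
    """tables maps each distinct prefix length to a hash table
    {prefix: next_hop}. Try the tables from the longest prefix length
    down to the shortest and return the first (best) match."""
    for length in sorted(tables, reverse=True):
        entry = tables[length].get(bitstring[:length])
        if entry is not None:
            return entry  # best matching prefix found
    return None  # fall back to the storage server

# Hypothetical forwarding tables, keyed by distinct prefix length:
tables = {
    1: {"1": "peer-A"},
    3: {"101": "peer-B"},
    5: {"10110": "peer-C"},
}
```

With these tables, the key "10110000" resolves through the length-5 table, while "10100000" falls back to the length-3 entry, mirroring the descent described in Section 4.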
3.1. Proposition and Algorithm

Statement. The hop count between the query emitter node for the chunk of video objects and the query matched node (the node that holds the chunk of video objects) is at most ⌈log_B N⌉, where N is the size of the multitier network and B is the base of the prefix for the emitter node.

Proof. Let s be the query-initiating node and let d be the query-matched node. Each node keeps an adjacency list whose size is determined by the base of its prefix. When the query-emitting node is the same as the query-matched node, that is, the proxy server, inequality (13) holds trivially for the single hop neighborhood. When the query is forwarded through intermediate nodes starting from the emitter node s to the sink node d, every intermediate node on the path maintains an adjacency list with the base of its own prefix, and the query is forwarded by incremental prefix routing; so the path distance is expressed as the sum of single hops along this chain, as in (14). The number of branches maintained at each intermediate node equals the base of its prefix, and each node on the path is a single hop neighbor of its predecessor, so the number of nodes reachable within h hops is the product of the branch counts of the first h nodes on the path. For this product to cover all N nodes of the network with branch base B, expression (14) becomes at most ⌈log_B N⌉.
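A numeric illustration, under the assumption that the proposition's bound has the form ⌈log_B N⌉ (the exact symbols are garbled in the source); the values N = 4096 and B = 4 are chosen only for the example:

```latex
% Assumed bound of the form \lceil \log_B N \rceil:
% a multitier network of N = 4096 peer nodes with prefix base B = 4
% resolves a query in at most
\lceil \log_{4} 4096 \rceil = 6 \text{ hops},
% since each forwarding step fixes one more base-4 prefix digit.
```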

Algorithms 1 and 2 are developed with the following considerations. The query message is generated from the proxy server; it is a bit-string header of the chunk video objects. More than one peer node in the network may possess the required chunk video objects. The aim of the algorithms is to find the fastest least-cost path towards the desired peer node. The query message is routed to the desired peer node via key identification.

4. Simulation Results and Discussion

The storage server contains the raw file (or coded video). The YUV file is created by decoding the raw file. It is first converted to the .m4v file format; then we create the mp4 container and simultaneously the reference videos. The frames are packetized for transport over the network. The mp4trace command sends the .mp4 file to the destination peer node through a freely available port other than the system ports (usually ports above 5000 are selected) [44]. The transport of these chunks of video objects, that is, "the sequence of packet frames," is done with RTP/UDP. The design and implementation of EvalVid-RA, a tool set for rate-adaptive VBR (variable bit rate) simulation added to ns-2, are based on modifications to the EvalVid Version 1.2 tool and the ns-2 interfacing code. The ffmpeg command of EvalVid is used to control the rate of some GOPs before the rate output stabilizes. The VBR rate controller is set to 600 kbit/s for each individual session. The mesh shaped hybrid multitier architecture, including the cache proxy server, has 10 levels. The highest level is the cache proxy server and the lowest level is the video storage server; the levels in between hold the peer nodes of the mesh shaped multitier. Each level has 4 interconnected peers. Peers belonging to the same level exchange buffer information to transfer the chunk video objects. The peers at a higher level are connected to lower-level peers in unicast; that is, the higher-level peers only download the chunk of video objects from the lower level. We consider two types of simulation. In type one, the peers maintain dynamic connections: every peer has an adjacent list of size 4 or 6, with the links selected from the 11 links available to the peer.
The 11 links are oriented as 4 links for unicast downloading from the lower-level peers, 4 links for unicast uploading to the higher-level peers, and 3 links for chunk transfer among peers at the same level. In the second type of simulation, we consider static sets of 4 and 6 links out of the 11 available to the peer. With static links, the full content of the video is transferred from the storage server or the intermediate peers through the initially fixed setup links for every session. The link capacities range from 400 kbps to 800 kbps: 400 kbps is the link capacity between the viewers and the cache proxy server, 800 kbps is the link capacity from the storage server to the next higher server (i.e., the billing server), and the intermediate-level mesh peers maintain a link capacity of 600 kbps. The required parameter values and ranges are summarized in Table 1.
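The per-peer link budget described above can be sketched as follows; the level indexing and the treatment of the edge levels are assumptions made for illustration, not details fixed by the paper.

```python
LEVELS = 10          # level 0: storage-server side ... level 9: proxy side
PEERS_PER_LEVEL = 4

def link_budget(level):
    """Links available to a peer at a given level: 4 unicast download
    links from the lower level, 4 unicast upload links to the higher
    level, and 3 links to the other peers of its own (fully meshed)
    level of 4 peers. Edge levels lack one vertical side."""
    down = PEERS_PER_LEVEL if level > 0 else 0
    up = PEERS_PER_LEVEL if level < LEVELS - 1 else 0
    lateral = PEERS_PER_LEVEL - 1
    return down + up + lateral

def link_capacity_kbps(level):
    """Illustrative capacity assignment following the description:
    400 kbps at the viewer/proxy edge, 800 kbps at the storage-server
    edge, 600 kbps for the intermediate mesh links."""
    if level == LEVELS - 1:
        return 400
    if level == 0:
        return 800
    return 600
```

For any interior level this yields the 11 links (4 + 4 + 3) from which the dynamic adjacent lists of size 4 or 6 are drawn.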

The confidence interval used in this simulation is 90%. For the small prototype evaluation, we restricted the number of viewers connected to a proxy server and the adjacent list size of a peer node. Between 20 and 40 viewers are connected to each proxy server, and this number can grow. The summed link capacity of the viewers connected to the proxy server exceeds the intermediate link capacity.

The simulation results are based on requests initiated from the proxy server when viewers submit their requests through it. In Figures 5–10, the vertical axis is the cumulative number of requests sent for the chunk video objects from the cache proxy server, and the horizontal axis is the number of hops to the location of those chunk video objects. The number of requests initiated from the cache proxy server is approximately the same in every run.

Figures 5 and 7 present the simulated minimum hop count to the required chunk of video objects at the peers or the storage server, where the peer nodes at every level maintain dynamic adjacent list sizes of 1 to 4 and 1 to 6, respectively. Figures 6 and 8 present the results where every peer at every level maintains a static adjacent list of size 4 or 6. To search for a destination peer's address, we simply start with the hash table for the longest prefix length, extract the leading bits of that length, and search the table. If we succeed, we have found the longest prefix match and thus our BMP (best matching prefix); if not, we move to the next smaller prefix length (by indexing the array one position lower) and continue the search. The search is forwarded by incremental prefix routing, and the number of hosts kept at each prefix digit is the base of the prefix. To minimize the hop count, we let the number of links at each intermediate node change dynamically from one up to the adjacent list size. So expression (12) is always bounded in terms of N, the number of "video object storage" peer nodes, and ⌈log_B N⌉ is the maximum number of hops to any destination. The comparative simulation results are presented in Figures 9 and 10. They show that 43,000 requests were served from peers at hop count 3 for the dynamic adjacent list of size 4, whereas 40,000 requests were served from peers at hop count 3 for the static adjacent list of size 4; the chunk size is the same in both cases. In Figures 9 and 10, the dotted line (dynamic) lies above the solid line (static) between hop count 0 and hop count 3; beyond that, the solid line lies above the dotted line between hop counts 3 and 7. The longest prefix match with a dynamic adjacent list on the basis of the "Heuristic Path Search Algorithm" gives the more cost-effective results.

The simulation results with dynamic adjacent list size 4 and static adjacent list size 4 are presented in Figure 9. In the dynamic environment with adjacent list size 4, 35 × 10² video objects are served from hop count 1, 130 × 10² from hop count 2, 430 × 10² from hop count 3, 370 × 10² from hop count 4, and 40 × 10² from hop count 5. In the static environment with adjacent list size 4, 30 × 10² video objects are served from hop count 1, 120 × 10² from hop count 2, 360 × 10² from hop count 3, 410 × 10² from hop count 4, and 50 × 10² from hop count 5. Here we use a rough estimate of the cost of video object transfer based on the metrics "number of video objects served" and hop count: the cost is the sum, over all hop counts h up to the last hop count, of the number of video objects served from hop count h multiplied by h, where h is the hop count towards the peer node containing the chunk video objects. This expression gives the following cost scores. The dynamic environment produced the score 3265 × 10² according to Figure 9, while the static environment produced the cost score 3740 × 10² according to Figure 9. Similarly, for the simulation results presented in Figure 10, the cost score in the dynamic environment with adjacent list size 6 is 2840 × 10², and the cost score in the static environment with adjacent list size 6 is 2880 × 10². The cost score directly depends on the number of computations; the performance of the system increases as the score decreases at any stage of transferring chunk video objects through the multitier architecture.
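The cost metric can be computed directly; the per-hop counts below are those read from Figure 9 for the dynamic case, in units of 10² video objects.

```python
def transfer_cost(served_per_hop):
    """Cost = sum over hop counts h of n_h * h, where n_h is the
    number of video objects served from peers at hop count h."""
    return sum(h * n for h, n in served_per_hop.items())

# Dynamic environment, adjacent list size 4 (Figure 9), units of 10^2:
dynamic_list4 = {1: 35, 2: 130, 3: 430, 4: 370, 5: 40}
cost_dynamic = transfer_cost(dynamic_list4)  # 3265 (x 10^2), matching the text
```

The same function applies to the static per-hop counts; a lower score means fewer object-hops and hence a cheaper transfer.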

5. Conclusions

In this paper, we have shown that a hybrid architecture combining a Peer to Peer network with a mesh type network enhances the content delivery of video objects. The requests generated by the proxy server are served from the distributed chunk video object storage at the various levels of the multitier architecture. The simulation used static as well as dynamically grown links at each middle tier containing storage nodes. The results show the effectiveness of the hybrid architecture over the pure Peer to Peer network considered exclusively in previous works. The hybrid approach successfully reduces the cost of the search path and hence reduces the delay. Another aspect of the work is that it selects the required portion of the video objects and supplies it to the desired node in the hybrid multitier architecture, which is an enhancement of previous Peer to Peer stream data transmission. There is ample scope for further improvement.

Conflict of Interests

The authors Soumen Kanrar and Niranjan Kumar Mandal declare that there is no conflict of interests regarding the publication of the paper.

Acknowledgment

The authors are grateful to Sharmista Das Kanrar from Bishop Westcott, Ranchi, India.