Abstract

In this paper, we introduce the Reconfigurable Video Coding (RVC) standard, based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. The MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called CAL Actor Language (CAL). CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support the high-level features of CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyzes the noncompliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still-image decoders are summarized. We show that the obtained results largely satisfy real-time constraints for an embedded design on FPGA, as we obtain a throughput of 73 FPS for the MPEG-4 decoder and 34 FPS for the coding and decoding process of the LAR coder on a video of CIF image size. This work resolves the main limitation of hardware generation from CAL designs.

1. Introduction

User requirements for high-quality video keep growing, which causes a noteworthy increase in the complexity of video codec algorithms. These algorithms have to be implemented on a target architecture that can be hardware or software. In 2007, the notion of Electronic System Level Design (ESLD) was introduced in [1] as a solution to decrease the time to market using high-level synthesis, that is, the automatic compilation of a high-level description into a low-level one called register transfer level (RTL). The high-level description is governed by models of computation, which are the rules defining the way data is transferred and processed. Many solutions were developed to automate the hardware generation of complex algorithms using ESLD. Synopsys developed a C-to-gate compiler called Synphony [2]. Mentor Graphics created a C-to-HDL compiler called Catapult C [3, 4]. For its NIOS II, Altera introduced C2H as a converter from C to HDL [5, 6]. To extend Matlab for hardware generation from functional blocks, Mathworks created a hardware generator for FPGA design [7]. In the university research field, the Lab-STICC laboratory in France developed a high-level synthesis tool called GAUT that extracts parallelism and generates VHDL code from a pure C description [8, 9]. The common point between all the previously quoted tools is that they are application-specific generators, which means that they are not always efficient on an entire multicomponent system description.

In this context, CAL [10] was introduced in the Ptolemy II project [11] as a general-purpose, target-agnostic dataflow language based on the Dataflow Process Network (DPN) model of computation [12], which is related to the Kahn Process Network (KPN) model [13]. The MPEG community standardized the RVC-CAL language in the MPEG RVC (Reconfigurable Video Coding) standard [14]. This standard provides a framework to describe the different functions of a codec as a network of functional blocks, called actors, developed in RVC-CAL. Some hardware compilers of RVC-CAL were developed, but they cannot compile the high-level structures of the language, so these structures have to be transformed manually.

In [15], we presented an original functional method to quicken HDL generation using a software platform for the rapid design and validation of high-complexity dataflow architectures, but going from the high-level to the low-level representation remained manual. Therefore, we proposed to add automatic transformations to make any RVC-CAL design synthesizable.

This paper extends a preliminary work presented in [16] by introducing efficient optimizations and studying their impact on the area and time consumption of the design. The transformation tool analyzes the RVC-CAL code and performs the required transformations to obtain synthesizable code whatever the complexity of the considered actor. In Section 2, we explain the main advantages of using the MPEG RVC standard for signal processing algorithms and the key notions of the RVC-CAL language, its behavioral structures, and its mechanisms. Section 3 formalizes actor behavior and states the hardware generation problem. The proposed transformation process is detailed in Section 4, and finally hardware implementation results of the MPEG-4 Part 2 decoder and the LAR codec are presented in Sections 5 and 6.

2. Background

Since the beginning of ISO/IEC/WG11 (MPEG) in 1988 with the appearance of MPEG-1, many video codecs have been developed (MPEG-4 Part 2, MPEG SVC, MPEG AVC, HEVC, etc.) with increasing complexity, so they take longer and longer to produce. In addition, every standard has a set of profiles depending on the implementation target or the user specifications. Consequently, it became a tough task for standardization communities to develop, test, and standardize a decoder in a reasonable time. Moreover, standard specifications are monolithic, which makes it harder to reuse or update existing algorithms. This observation originated a new design methodology standard introduced by MPEG and called Reconfigurable Video Coding.

In the following, we present an overview of the MPEG RVC standard and its associated tools and frameworks; we also present the main features of the CAL actor language and the limitations that motivated this work.

2.1. MPEG RVC

RVC presents a modular library of elementary components (actors). The most important and attractive features of RVC are reconfigurability and flexibility. An RVC design is a dataflow directed graph with actors as vertices and unidirectional FIFO channels as edges. An example of a graph is shown in Figure 1.

Actually, defining video processing algorithms using elementary components is very easy and rapid with RVC since every actor is completely independent from the other actors of the network. Every actor has its own scheduler, variables, and behavior. The only communication points of an actor are its input ports, connected to FIFO channels, where it checks the presence of tokens; as explained later, an internal scheduler then allows or forbids the execution of elementary functions called actions, depending on their firing rules (see Section 3). Thus, RVC ensures concurrency, modularity, reuse, scalable parallelism, and encapsulation. In [17], Janneck et al. show that, for hardware designs, the RVC standard allows a gain of 75% in development time and considerably reduces the number of code lines compared with manual HDL code. To manage all the presented concepts of the standard, RVC provides a framework based on the use of the following.

(i) A subset of the CAL actor language called RVC-CAL that describes the behavior of the actors (see details in Section 2.2).

(ii) A language describing the network called FNL (Functional unit Network Language) that lists the actors, the connections, and the parameters of the network. FNL is an XML dialect that allows a multilevel description of the actor hierarchy, which means that a functional unit can be a composition of other functional units connected in another network.

(iii) The Bitstream Syntax Description Language (BSDL) [18, 19] to describe the structure of the bitstream.

(iv) An important Video Tool Library (VTL) of actors intended to cover all MPEG standards. This VTL is under development and already contains 3 profiles of MPEG-4 decoders (Simple Profile, Progressive High Profile, and Constrained Baseline Profile).

(v) Tools for edition, simulation, validation, and automatic generation of implementations:

(a) the OpenDF framework [20], an interpreter infrastructure that allows the simulation of hierarchical actor networks. Xilinx contributed to the project by developing a hardware compiler called OpenForge (available at http://openforge.sourceforge.net/) [21] to generate HDL implementations from RVC-CAL designs;

(b) the Open RVC-CAL Compiler (Orcc) (available at http://orcc.sourceforge.net/) [19], an RVC-CAL compiler under development. It compiles a network of actors and generates code for both hardware and software targets. Orcc is based on works on actor and action analysis and synthesis [22, 23]. In the front-end of Orcc, a graph network and its associated CAL actors are parsed into an abstract syntax tree (AST) and then transformed into an intermediate representation (IR) that undergoes typing, semantic checks, and several transformations in the middle-end and the back-end. Finally, pretty printing is applied on the resulting IR to generate a chosen implementation language (C, Java, XLIM, LLVM, etc.).

At this level, the question is: why RVC-CAL and not C? Actually, a C description involves not only the specification of the algorithms but also the way inherently parallel computations are sequenced, the way data is exchanged through inputs and outputs, and the way computations are mapped. Recovering the original intrinsic properties of the algorithms by analyzing the software program is impossible. In addition, the opportunities for restructuring transformations on imperative sequential code are very limited compared to the parallelization potential available on multicore platforms. For these reasons, RVC adopted the CAL language for actor specification. The main notions of this language are presented below.

2.2. CAL Actor Language

The execution of an RVC-CAL code is based on the exchange of data tokens between computational entities (actors). Each actor is independent from the others since it has its own parameters and, if needed, its own finite state machine. Actors are connected by FIFO channels to form an application or a design. Executing an actor consists in firing elementary functions called actions; an action firing may change the state of the actor in the case of an FSM. An RVC-CAL dataflow model is shown in the network of Figure 2.

Figure 3 presents an example of a CAL actor realizing the sum of two tokens read from its two input ports.

As in VHDL, an actor definition begins by defining the I/O ports and their types; the actions are listed afterwards. An action also begins by declaring the I/O ports it uses from the list of ports of the actor, and this declaration includes the number of tokens the action has to find in the FIFO to be fireable. In the "sum" actor, the internal scheduler allows action "add" only when there is at least one token in the FIFO of port "INPUT1" and one token in the FIFO of port "INPUT2"; this property explains how an actor can be totally independent and can neither read nor modify the state of any other actor. Of course, an actor may contain any number of actions, which can be governed by an internal finite state machine. At a given time, two or more actions may satisfy the required firing conditions, so the notion of priority was introduced (see details in Section 3).
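
As an illustration, here is a minimal RVC-CAL sketch of such an actor; it follows the structure of Figure 3, with the output port name OUTPUT being an assumption:

    // Minimal sketch of the "sum" actor of Figure 3 (output port name assumed)
    actor Sum () int INPUT1, int INPUT2 ==> int OUTPUT :
      // "add" is fireable as soon as one token is available on each input port
      add: action INPUT1:[a], INPUT2:[b] ==> OUTPUT:[a + b]
      end
    end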

For the same behavior, an actor may be defined in different ways. Let us consider the "sum-5" actor of Figure 4 that reads 5 tokens on a port "IN," computes their sum, and produces the result on a port "OUT."

In Figure 4(a), the required algorithm is defined in only one action. The condition of 5 required tokens is expressed by the instruction "repeat 5." Action "add" fires by consuming the 5 tokens from the FIFO into an internal buffer "i." After data storage, the algorithm of the action is applied. Finally, the action firing finishes by writing the result to the port "OUT."

Such a description is very fast to develop and implement on software targets, but for hardware implementations a multitoken read is not appropriate. This is the reason for developing the equivalent monotoken code of Figure 4(b). In this description, we use a finite state machine to lock the actor in the state "state0." While counter < 5, only the action "read" can be fired, storing tokens one by one in the "data" buffer. Once the condition of action "read_done" (counter = 5) is true, both "read" and "read_done" actions are fireable. This is why the priority "read_done > read" is important to keep the determinism of the actor. Finally, the firing of the "read_done" action involves an FSM update to "state1," where only the "process" action can be fired, after which the actor is back in its initial state. A sketch of both versions is given below.
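
The following minimal RVC-CAL sketch reproduces the structure of both versions of "sum-5" described above; exact variable declarations are assumptions:

    // (a) Multitoken version: one action consumes 5 tokens at once
    actor Sum5 () int IN ==> int OUT :
      add: action IN:[i] repeat 5 ==> OUT:[s]
      var int s := 0
      do
        foreach int k in 0 .. 4 do
          s := s + i[k];
        end
      end
    end

    // (b) Equivalent monotoken version with an explicit FSM
    actor Sum5Mono () int IN ==> int OUT :
      List(type: int, size = 5) data;
      int counter := 0;

      read: action IN:[t] ==>
      guard counter < 5
      do
        data[counter] := t;
        counter := counter + 1;
      end

      read_done: action ==>
      guard counter = 5
      end

      process: action ==> OUT:[s]
      var int s := 0
      do
        foreach int k in 0 .. 4 do
          s := s + data[k];
        end
        counter := 0;
      end

      priority
        read_done > read;   // resolves the conflict when counter = 5
      end

      schedule fsm state0 :
        state0 (read)      --> state0;
        state0 (read_done) --> state1;
        state1 (process)   --> state0;
      end
    end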

3. Actor Behavior Formalism

Actor execution is governed by a set of conditions called firing rules. Moreover, during a firing, many internal features of the actor are updated (state, state variables, etc.). All these concepts and behavior evolutions are detailed below. The actor execution, the so-called firing, is based on the Dataflow Process Network (DPN) principle [12] derived from the Kahn Process Network (KPN) [13]. Let $\Omega$ be the universe of all token values exchanged by the actors and $\mathbb{S} = \Omega^{*}$ the set of all finite sequences in $\Omega$. We denote the length of a sequence $s \in \mathbb{S}$ by $|s|$ and the empty sequence by $\lambda$. Considering an actor with $m$ inputs and $n$ outputs, $\mathbb{S}^m$ and $\mathbb{S}^n$ are the sets of $m$-tuples and $n$-tuples consumed and produced. For example, $s_0 = [\lambda, [t_0, t_1, t_2]]$ and $s_1 = [[t_0], [t_1]]$ are sequences of tokens that belong to $\mathbb{S}^2$, and we have $|s_0| = [0, 3]$ and $|s_1| = [1, 1]$.

3.1. Actor Firing

A dataflow actor is defined as a pair $\langle f, R \rangle$ such that:

(i) $f : \mathbb{S}^m \to \mathbb{S}^n$ is the firing function;
(ii) $R \subseteq \mathbb{S}^m$ is the set of firing rules;
(iii) for all $r \in R$, $f(r)$ is finite.

An actor may have $N$ firing rules, which are finite sequences of $m$ patterns (one for each input port). A pattern is an acceptable sequence of tokens for an input port. It defines the nature and the number of tokens necessary for the execution of at least one action. RVC-CAL also introduces the notion of guard as an additional condition on token values. An example of a firing rule $r_j$ in $\mathbb{S}^2$ is
$$r_j = \left[ [t_0]_{g_{j,k}},\ [t_1, t_2, t_3] \right], \quad \text{with } g_{j,k}[x] \Leftrightarrow x > 0. \tag{1}$$
Equation (1) means that if there is a positive token in the FIFO of the first input port and 3 tokens in the FIFO of the second input port, then the actor will select and execute a fireable action. An action is fireable (or schedulable) iff:

(i) the execution is possible in the current state of the FSM (if an FSM exists);
(ii) there are enough tokens in the input FIFOs;
(iii) the guard condition returns true.

An action may belong to a finite state machine or be untagged, which gives it a higher priority than the FSM actions.

3.2. Actor Transition

The FSM transition system of an actor is defined as $\langle \sigma_0, \Sigma, \succ, \tau \rangle$, where $\Sigma$ is the set of all the states of the actor, $\sigma_0$ is the initial state, $\succ$ is a priority relation, and $\tau \subseteq \Sigma \times \mathbb{S}^m \times \mathbb{S}^n \times \Sigma$ is the set of all possible transitions. A transition from a state $\sigma$ to a state $\sigma'$ consuming a sequence $s \in \mathbb{S}^m$ and producing a sequence $s' \in \mathbb{S}^n$ is defined as $(\sigma, s, s', \sigma')$ and denoted
$$\sigma \xrightarrow{s \mapsto s'} \sigma' \in \tau. \tag{2}$$

To solve the problem of the existence of more than one possible transition from the same state, RVC-CAL introduces the priority relation: for transitions $t_0, t_1 \in \tau$, the fact that $t_0$ has a higher priority than $t_1$ is written $t_0 \succ t_1$. As explained in [24], a transition $\sigma \xrightarrow{s \mapsto s'} \sigma' \in \tau$ is enabled iff no higher-priority transition is possible from the same state:
$$\neg \exists\, \left(\sigma \xrightarrow{p \mapsto q} \sigma''\right) \in \tau : \left(\sigma \xrightarrow{p \mapsto q} \sigma''\right) \succ \left(\sigma \xrightarrow{s \mapsto s'} \sigma'\right). \tag{3}$$

This section presented and explained the main formal principles of RVC-CAL. The following subsection describes the limitation they raise for hardware generation, and Section 4 presents an automatic transformation that avoids this limitation without changing the overall macrobehavior of the actor.

3.3. Hardware Generation Problematic

A firing rule $s$ is called multitoken iff $\exists e : |s|_e > 1$; otherwise it is called a monotoken rule. The limitation of OpenForge is that it does not support multitoken rules, which are omnipresent in most actors. The observation of Figure 4 shows the incontestable complexity difference between the multitoken code (a) and the monotoken code (b). Moreover, manually lowering a CAL code from high level to low level by creating the new actions, variables, and state machine contradicts the main purpose of the RVC standard, namely that CAL is a target-agnostic language: we must write the same CAL whether targeting a hardware or a software implementation. Our work consists in automatically transforming the data read/write processes from multitoken to monotoken while preserving the same actor behavior. All the required actions, variables, and finite state machines are created and optimized directly in the intermediate representation of the Orcc compiler. The following section explains the achieved transformation mechanism.

4. Methodology for Hardware Code Generation

As shown in Figure 5, our transformation acts on the IR of Orcc. The HDL implementation is later generated using OpenForge.

4.1. Actor Transformation Principle

Let us consider an actor with a multitoken firing rule $r \in \mathbb{S}^k$ such that $|r| = [r_0, r_1, \ldots, r_{k-1}]$; this rule fires a multitoken action $a$ realizing the transition $source \xrightarrow{a} target \in \tau$. Let $\mathbb{I}$ be the set of all input ports. The transformation creates, for every input port, an internal buffer with read and write indexes, and splits $r$ into a set of $k$ firing rules so that
$$\forall i \in \mathbb{I},\ \exists!\, \rho,\ \rho \in \mathbb{S}^1 : |\rho| = [1],\quad g_\rho \Leftrightarrow IdxWrite_i - IdxRead_i < sz_i, \tag{4}$$
with $\rho$ the monotoken firing rule of an untagged action $untagged_i$, $g_\rho$ the guard of $\rho$, and $sz_i$ the size of the associated internal buffer, defined as the closest power of 2 of $r_i$. This guard checks that the buffer contains an empty place for the token to read. The multitoken action is consequently removed, and new read actions that read one token from the internal buffers are created. While reading tokens, another firing rule may be validated and cause the firing of an unwanted action. To avoid the nondeterminism of such a case, we use an FSM to put the actor in a reading loop so that it can only read tokens. The loop is entered using a $transition$ action realizing the FSM passage $source \xrightarrow{transition} read \in \tau$; it has the same priority order as the deleted multitoken action but carries no processing. The read actions loop in the read state with the transition $t = read \xrightarrow{read} read \in \tau$. The loop is exited, once all necessary tokens are read, using a read done action and a transition to the process state $t = read \xrightarrow{readDone} process \in \tau$. The treatment of the multitoken action is put in a process action with a transition $process \xrightarrow{process} write \in \tau$. The multitoken outputs are likewise transformed into a writing loop, with write actions that store data directly in the output FIFOs through a transition $w = write \xrightarrow{write} write \in \tau$, and a write done action that ensures the FSM transition $w = write \xrightarrow{writeDone} target \in \tau$. The resulting schedule is sketched below.
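
As a reading aid, the FSM macroblock generated around one multitoken action $a$ can be summarized by the following RVC-CAL schedule; the action names are illustrative, not the exact identifiers produced by the tool:

    // States and transitions created around the removed multitoken action "a"
    schedule fsm source :
      source    (a_transition) --> a_read;     // enters the reading loop, no processing
      a_read    (a_read)       --> a_read;     // one monotoken read per input buffer
      a_read    (a_read_done)  --> a_process;  // all required tokens are buffered
      a_process (a_process)    --> a_write;    // body of the original action
      a_write   (a_write)      --> a_write;    // one output token written per firing
      a_write   (a_write_done) --> target;
    end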

For example, the actor $A$ of Figure 6 is defined with $f : \mathbb{S}^3 \to \mathbb{S}^2$ and a multitoken firing rule
$$r \in \mathbb{S}^3, \quad r = [[t_0, t_1],\ [t_2, t_3, t_4],\ [t_5]].$$

Consequently, $|r| = [2, 3, 1]$, which means that there is an action in $A$ that fires if 2 tokens are present on port $IN1$, 3 tokens are present on $IN2$, and one token is present on $IN3$. The transformation creates the FSM macroblock of Figure 7.

4.2. FSM Creation Cases

We consider an example of an actor defined as $f : \mathbb{S}^3 \to \mathbb{S}^2$ containing the actions $a_1$ to $a_5$, such that $a_3$ is the only action applying a multitoken firing rule $r \in \mathbb{S}^3$.

Creating an FSM only for action $a_3$ is not appropriate because $a_1$, $a_2$, $a_4$, and $a_5$ would then get a higher priority, which may not match the original behavior. The solution is to create an initial state containing all the actions and to add the created FSM macroblock of $a_3$ (previously presented in Figure 7). The resulting FSM is presented in Figure 8.

We now suppose the same actor scheduled with an initial FSM as shown in Figure 9.

The transition $t = S_1 \xrightarrow{s \mapsto s'} S_2 \in \tau$ is substituted with the macroblock of $a_3$, as shown in Figure 10.

4.3. Optimizations

To improve the transformation, some optimizations were added. In the transformation method presented above, we used the untagged actions to store data in the internal buffers, then read actions to peek the required tokens from the internal buffers using read/write indexes and masks; to preserve the schedulability, the action was split into a transition action that contains the firing rule and a process action that applies the algorithm. The proposed optimization consists in making the action read directly from the internal buffers. The firing rule of the action is transformed as presented in (4) to detect the presence of enough data in the internal buffers. Let us reconsider the basic example of the "sum-5" actor of Figure 4 of Section 2.2. The transformation explained above and the optimized transformation of this actor are presented in Figure 11. The actor is transformed this way: first, an internal buffer and an untagged action are created to store data inside the actor; then the input pattern of the read action is transformed into a connection to the internal buffer. Every read or write access to the internal buffer must be masked, that is, taken modulo the buffer size, since the buffer is circular. An illustrative sketch follows.
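
A minimal RVC-CAL sketch of the optimized "sum-5" actor may look as follows; buffer and index names are assumptions. The buffer size 8 is the closest power of 2 of 5, so the modulo reduces to a bit mask in hardware:

    actor Sum5Opt () int IN ==> int OUT :
      List(type: int, size = 8) buf;   // internal circular buffer, sz = 8
      int idxWrite := 0;               // free-running indexes; in hardware they
      int idxRead := 0;                // wrap naturally on their finite bit width

      // Untagged action (always highest priority): stores one token per firing
      action IN:[t] ==>
      guard idxWrite - idxRead < 8     // buffer not full, cf. guard (4)
      do
        buf[idxWrite mod 8] := t;      // masked circular write
        idxWrite := idxWrite + 1;
      end

      // The processing action now reads directly from the internal buffer
      process: action ==> OUT:[s]
      guard idxWrite - idxRead >= 5    // enough buffered tokens
      var int s := 0
      do
        foreach int k in 0 .. 4 do
          s := s + buf[(idxRead + k) mod 8];  // masked circular read
        end
        idxRead := idxRead + 5;
      end
    end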

5. RVC Case of Study: MPEG 4 SP Intradecoder

To assess the performance of the previously presented transformation, we applied it on the whole MPEG-4 Simple Profile intradecoder. This choice is explained by the fact that a stable design exists in the VTL and also because this decoder includes various image processing algorithms of varying complexity. In the following, we present an overview of this codec architecture and its basic actors. We also present the implementation results and a comparison with an academic high-level synthesis tool called GAUT.

5.1. Concept

MPEG codecs all share a common design. It begins with a parser that extracts motion compensation and texture reconstruction data. The parser is followed by reconstruction blocks for texture and motion and by a merger, as presented in Figure 12. This decoder is a full example of coding techniques: it encapsulates prediction, scan, quantization, the IDCT transform, buffering, interpolation, merging, and especially the very complex step of parsing.

Table 1 gives an idea of the complexity of the parsers in MPEG-4 Simple Profile and MPEG Advanced Video Coding (AVC).

The actors of Figure 12 are the main functional units; some of them are hierarchical compositions of actor networks. An actor may be instantiated more than once, so the 27 FUs lead to 42 actor instantiations.

5.2. Implementation and Results

The achieved automatic transformation was applied on the MPEG-4 SP intradecoder (see the design in Orcc Applications, available at http://orcc.sourceforge.net/), which contains 29 actors. We omitted the interdecoder part because it is very memory consuming. The generated HDL code was implemented on a Virtex-4 (xc4vlx160-12ff1148), and the area consumption results we obtained are presented in Table 2. The removal of read actions, buffers, and process actions had an important impact on the area consumption, which decreased by about 50%.

After the synthesis of the design, we applied a simulation stream of compressed videos. Table 3 presents the timing results for a video of CIF (352×288) image size.

We notice that timing results were only partially improved. This is due to the presence of division operations in some actors: in our transformation, we replaced divisions by a Euclidean division, which is very costly and time consuming. The impact is noticeable since these divisions reduced the maximum frequency by 60%. Therefore, we applied the transformation on the inverse discrete cosine 2D transform (IDCT2D). We chose this actor because it contains a very complex algorithm, functions, and procedures. We compared it with an optimal low-level architecture designed by Xilinx experts and also with an existing implementation study of a directly written VHDL algorithm [25]. For a significant comparison, we used the same implementation target as that study, namely the Xilinx Spartan-3 XC3S4000. Timing and area consumption comparisons are presented in Tables 4 and 5.
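
Regarding the division issue mentioned above, the cost can be understood from a minimal sketch of a restoring Euclidean division as it could appear inside an action body; this illustrates the principle and is not the exact code generated by the tool:

    // Computes q = num / den and r = num mod den by repeated subtraction;
    // the iteration count depends on the data, which hurts the critical path
    q := 0;
    r := num;
    while r >= den do
      r := r - den;
      q := q + 1;
    end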

Obviously, Table 5 reveals that the area results of the optimized design are very close to those of the Xilinx low-level design. This property holds for all actors containing more computing algorithms than data control and management algorithms. Concerning the area consumption of the VHDL design, it is expected to find results near the optimal design and clearly worse than the Xilinx design; this is due to the synthesis constraints indicated in [25], which favor processing speed at the expense of area. This also explains the very high FPS rate of that design in Table 4. The timing results of the other designs show that the optimized design performances remain far from the optimal Xilinx design. This is due to the low-level architecture made by Xilinx experts, which is completely different and oriented toward hardware generation. This architecture is a pipelined set of actors realizing the IDCT2D (rowsort, fairmerge, IDCT1D, separate, transpose, retranspose, and clip), a relatively complex design compared with the high-level IDCT2D code used for the transformation.

After comparing with the Xilinx design and a directly written VHDL design, we compared our results with existing generation tools, considering the GAUT hardware generator. This tool is an academic high-level synthesizer from C to VHDL. It extracts parallelism and creates a scheduled dependency graph made of elementary operators. GAUT synthesizes a pipelined design with a memory unit, a communication interface, and a processing unit. However, like most existing hardware generators, GAUT is not able to manage a system-level design with very high complexity and a variety of processing algorithms. Moreover, there are many restrictions on the C input code to obtain a functioning design. As it was impossible to test the whole MPEG-4 decoder, we chose the IDCT2D algorithm so as to compare with the previously presented results.

The IDCT2D was thus generated with GAUT, and we obtained the results of Table 6.

Results show that the optimized transformation generates a better design, even for the specific case study of the IDCT2D.

6. Still Image Codec: LAR Case of Study

The LAR is a still-image coder [26] developed at the IETR/INSA of Rennes laboratory. It is based on the idea that spatial coding can be locally dependent on the activity in the image: the higher the activity, the lower the resolution. This activity depends on the variation or uniformity of the local luminance, which can be detected using a morphological gradient. In the following, we detail the coding principle of the LAR, and we present the implementation techniques and results using the automatic transformation approach.

6.1. Concept

The LAR coding is based on considering an image as the superposition of a global information image (the block mean image) and a local texture image, given by the difference between the original image and the global one. This principle is modeled by
$$I = \bar{I} + (I - \bar{I}) = \bar{I} + E, \tag{5}$$
where $I$ is the original image, $\bar{I}$ is the global information image, and $E = I - \bar{I}$ is the error image. The dynamic range of the error image is consequently dependent on the local activity: in uniform regions, $\bar{I}$ values are close or equal to those of $I$, so $E$ values are around zero with a low dynamic range.

Considering these principles, the LAR coder (Figure 13) is composed of two parts: the FLAT LAR [27], which ensures the global information coding, and the spectral part, which is the error spectral coder.

Different profiles have been designed to fit different types of applications. In this paper, we focus on the baseline coder. Its mechanisms are detailed in the following.

The FLAT LAR
The Flat LAR is composed of 3 main parts: the partitioning, the block mean value computation, and the DPCM (Differential Pulse Code Modulation). In our work, only the DPCM is not yet developed in RVC-CAL.

(i) Partitioning: a Quad-tree partitioning is applied on the image pixels. The principle is to start from the lowest block size (2×2) and to compare the difference between the maximum (MAX) and the minimum (MIN) values of the block with a threshold (THD) defined as a generic variable of the design. If (MAX − MIN) > THD, the current block size is kept; otherwise, the (N×2)×(N×2) block size is considered. This process is applied recursively on all the image blocks. The output of the whole process is the block size image.

(ii) Block mean value computation: this process is based on the Quad-tree output image. For each block of the variable-size image, a mean value is put in the block, as presented in the example of Figure 14.

(iii) The DPCM: the DPCM process is based on the prediction of neighbor values and the quantization of the block mean value image. The observation that a pixel value is mostly equal to that of a neighbor led to the following estimation algorithm (a code sketch is given after this list). Considering the pixels of Figure 15, the value of X is estimated as follows: if |B − C| < |A − B| then X = A, else X = C.
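
A minimal RVC-CAL sketch of this neighbor predictor could be written as follows; the absolute values are expanded inline since no abs builtin is assumed:

    // DPCM neighbour predictor of Figure 15: estimates X from neighbours A, B, C
    function predict(int a, int b, int c) --> int :
      if (if b > c then b - c else c - b end)     // |B - C|
         < (if a > b then a - b else b - a end)   // |A - B|
      then a
      else c
      end
    end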

The Hadamard Transform
The spectral coder, also called the texture coder, is composed of a variable block size Hadamard transform [28] and the Golomb-Rice [29, 30] entropy coder. The Golomb-Rice coder is still under development in RVC-CAL.
The Hadamard transform derives from a generalized class of the Fourier transform. It consists in multiplying a $2^m \times 2^m$ matrix by a Hadamard matrix ($H_m$) of the same size. The transform is defined as follows: $H_0$ is the identity matrix, so $H_0 = 1$. For any $m > 0$, $H_m$ is then deduced recursively by
$$H_m = \frac{1}{\sqrt{2}} \begin{pmatrix} H_{m-1} & H_{m-1} \\ H_{m-1} & -H_{m-1} \end{pmatrix}. \tag{6}$$
Here are examples of Hadamard matrices:
$$H_0 = 1, \quad H_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad H_2 = \frac{1}{2} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}, \quad \text{and so forth.} \tag{7}$$
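
As an illustration of how such a transform maps to an actor, here is a minimal, unnormalized RVC-CAL sketch of an $H_1$ stage operating on a 2×2 block received as four tokens; the actor and port names are assumptions, and, as explained in Section 6.2.1, the normalization is postponed to the quantization step:

    actor Hadamard2x2 () int IN ==> int OUT :
      // Applies the 2D H1 butterfly to a flattened 2x2 block [p0 p1; p2 p3]
      h1: action IN:[p] repeat 4 ==> OUT:[r] repeat 4
      var List(type: int, size = 4) r
      do
        r := [p[0] + p[1] + p[2] + p[3],
              p[0] - p[1] + p[2] - p[3],
              p[0] + p[1] - p[2] - p[3],
              p[0] - p[1] - p[2] + p[3]];
      end
    end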

6.2. Implementation and Results

This section explains the mechanisms of the Hadamard transform and the Quad-tree used in the implementation.

6.2.1. Hardware Implementation

The LAR coding depends on the content of the image. The Quad-tree applies a morphological gradient to extract information about the local activity of the image. The output is the block size image, represented by variable-size blocks: 2×2, 4×4, or 8×8. Using the block size image, the Hadamard transform applies the adequate transform on the corresponding block: a 2×2 block in the size image undergoes a 2×2 Hadamard ($H_1$) and a normalization specific to 2×2 blocks, and the process is identical for 4×4 and 8×8 blocks. A quantization step, adapted to the current block size, is applied on the Hadamard output image; for each block size, a quantization matrix is predefined. In practice, the normalization of the Hadamard transform is postponed to the quantization step so as to decrease the noise due to successive divisions.

The implemented LAR is presented in Figure 16.

As a first step, the memory management block stores the pixel values of the original image line by line. Once an 8×8 block is obtained, the actor divides it into sixteen 2×2 blocks and sends them in a specific order, as presented in Figure 18.

This order is very important to improve the performance of the remaining actors. In fact, considering Figure 18, when the tokens are so ordered, the first 4 tokens correspond to the first 2×2 block, the first 16 tokens to the first 4×4 block, and so forth. Consequently, as presented in Figure 16, the output of the $H_1$ is directly the input of the $H_2$, and the output of the $H_2$ is directly the input of the $H_3$.

In the Quad-tree, this order is also crucial. As presented in Figure 17, superposing the same actor (max, e.g.) three times provides, at the output of the first actor, the maximum values of the 2×2 blocks; at the output of the second actor, the maximum values of the 4×4 blocks; and finally, at the output of the third one, the maximum values of the 8×8 blocks. Using the maximum and minimum values, the morphological gradient in the Gradstep actors can extract the block size image. The same technique is used to compute the block sums with three superposed sum actors; the block mean value actor then combines the sums and the sizes to build the block mean value image. A sketch of such a reduction actor is given below.
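
A minimal RVC-CAL sketch of the reduction actor used in this cascade could look as follows (names are assumptions): each instance outputs one maximum per group of four input tokens, so three chained instances yield the 2×2, 4×4, and 8×8 block maxima thanks to the token order of Figure 18.

    actor Max4 () int IN ==> int OUT :
      // Reduces 4 consecutive tokens to their maximum
      max: action IN:[v] repeat 4 ==> OUT:[m]
      var int m := 0
      do
        m := v[0];
        foreach int k in 1 .. 3 do
          if v[k] > m then
            m := v[k];
          end
        end
      end
    end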

We also notice that an $H_2$ transform can be achieved using the $H_1$ results of the four 2×2 blocks constituting the 4×4 block, and the same observation can be made for the $H_3$. This observation is very important to decrease the complexity of the process. In fact, the Hadamard transform of the LAR applies an $H_1$ transform on the whole image, then applies the $H_2$ transform only on the 4×4 and 8×8 blocks, and the $H_3$ transform only on the 8×8 blocks. These partial $H_2$ and $H_3$ transforms are much less complex than the full transforms. Consequently, as shown in Figure 16, we designed the $H_2$ and the $H_3$ using $H_1$ actors associated with memory management units, which sort tokens into the adequate order and decide, considering the block size, whether the block undergoes the transform or not.

It is worth mentioning that almost all actors have been developed with generic variables for memory sizes or gradsteps, which means that the design can easily be adapted from one image size to another or extended with higher Hadamard stages ($H_4$, $H_5$, etc.).

In [15], we added some optimizations to the processes, using a ping-pong memory management algorithm [31] to pipeline the processing.

6.2.2. Results and Comparison

As mentioned above, this work aims at comparing the hardware implementation performances of the same LAR architecture generated with the optimized automatic transformation and with a manual transformation. The automatic transformation was applied on the 23 actors of the LAR using Orcc. The generated HDL code was implemented on a Virtex-4 (xc4vlx160-12ff1148). The area consumption results obtained are presented, together with those of the manual transformation, in Table 7.

After the synthesis of the design, we applied a simulation stream of compressed videos. Table 8 presents the timing results for a video of CIF (352×288) image size.

For area consumption, the difference is not considerable for LUTs and occupied slices; it is explained by the fact that the transformation applies a general modification whatever the complexity of the actor. Also, creating an internal buffer for every input port involves additional area consumption.

Concerning the timing results, the performances of the automatically and manually transformed designs remain close and acceptable. The latency difference is explained by the fact that the untagged actions, being always given priority over the rest of the actions, favor data reading: as long as there is data in the FIFO, the untagged action fires, even if enough data is already available to fire the processing actions. This problem will be resolved by further optimizations of the buffer size.

7. Conclusion

This paper presented an automatic transformation of RVC-CAL designs from high-level to low-level descriptions. The purpose of this work is to provide a general solution to automate the whole hardware generation flow from the system level. This transformation avoids the structures that are not supported by RVC-CAL hardware compilers. We applied it on the 29 actors of the MPEG-4 Part 2 video intradecoder and successfully obtained the same behavior as the multitoken design together with a synthesizable hardware implementation. To change the test context, we also automatically transformed a high-level design of the LAR still-image codec and obtained acceptable results.

Several optimizations were added to the transformation, reducing the area consumption by about 50%. The transformation process is now generalized to all actors.

The most important point of this work is our contribution to making RVC-CAL hardware generation very rapid, with an average gain of 75% in conception, development, and validation time compared with a manual approach. We ensured that the generation is applicable at the system level whatever the complexity of the actors.

Currently, improvements are in progress to customize the transformation depending on an analysis of actor complexity. Future work will study the impact of the transformation on the power consumption of the generated implementations.

Acknowledgments

Special thanks to Matthieu Wipliez, Damien De Saint-Jorre, and Hervé Yviquel for their relevant contributions to the source code.