Abstract

Image registration is a powerful tool in medical image analysis and supports the clinical routine in several respects. There are many well-established elastic registration methods, but so far none of them preserves discontinuities in the displacement field. Such discontinuities appear in particular at organ boundaries during breathing-induced organ motion. In this paper, we exploit the guiding role that motion segmentation can play during discontinuity-preserving registration. The motion segmentation is embedded in a continuous cut framework, which guarantees convexity of the segmentation subproblem. Furthermore, we show that a primal-dual method can be used to estimate a solution to this challenging variational problem. Experimental results are presented for MR images with apparent breathing-induced sliding motion of the liver along the abdominal wall.

1. Introduction

Image registration has become an indispensable tool for many medical applications, including image-guided therapy systems. Today’s image registration methods [1] are powerful at handling rigid as well as nonrigid motion on mono- and multimodal images. With the introduction of imaging technologies capable of capturing 4D organ motion, such as 4D-MRI [2], 4D-CT, or 4D-Ultrasound, imagery of sliding organs has become a significant focus of research. Most state-of-the-art image registration methods cannot yet properly deal with these data sets, as their regularization enforces continuous motion fields across organ boundaries.

In recent publications, the concept of direction-dependent regularization of the displacement field has been introduced; see, for example, [3]. The drawback of these methods is that a good manual segmentation of the boundaries must be provided. Approaches that do not rely on prior manual segmentations have been a topic of research in classical computer vision, for example in optical flow estimation, for some decades. The transfer of these research results into the medical field has, however, been scarce. Already in 1989, Mumford and Shah [4] proposed in their pioneering work a functional for image segmentation that avoids spatial smoothing in certain locations of the image, thus preserving discontinuities. Some years later, Weickert and Schnörr [5] proposed an extension of nonquadratic variational regularization for discontinuity-preserving optical flow that uses spatiotemporal regularizers instead of flow-driven spatial smoothness. The resulting convex functional guaranteed global convergence and could be minimized with standard gradient-based optimisation schemes. Vese and Chan [6] then introduced a level-set-based approach to efficiently solve the Mumford-Shah minimization problem for segmentation. Another influential approach based on the Total Variation (TV) norm, which is known to preserve discontinuities, is the ROF model proposed by Rudin et al. [7]. The beneficial behaviour of the TV norm was also exploited in recent registration and optical flow methods; see, for example, [8, 9].

Motion segmentation has been a topic of research for quite some time. In 1993, Nesi [10] proposed a variational optical flow approach that incorporates a variable for the presence of discontinuities and leads to a piecewise smooth motion estimation. Cremers and Schnörr introduced in [11] a variational approach seamlessly integrating motion segmentation and shape regularization into a single energy functional. In [12], the authors formulated the motion segmentation problem in a Bayesian inference framework, in which the motion discontinuity can be represented either by an explicit spline function or by an implicit level set formulation allowing an arbitrary number of objects to be segmented. As motion discontinuities mainly appear at object boundaries, it seems natural to combine motion segmentation and registration. Given a good motion segmentation, a proper discontinuous motion field can be estimated and vice versa. Amiaz and Kiryati [13] tried to leverage this property by combining motion segmentation with the optical flow method of Brox et al. [8] in a level set framework. They showed that motion segmentation leads to better discontinuities in the motion field. However, their approach depends highly on the initialisation of the motion segmentation and the displacement field. Furthermore, the level set approach is prone to local minima, so a rather good initialisation has to be provided in advance.

In this paper we propose an elastic registration approach that handles discontinuities in the motion field by combining motion segmentation and registration in a variational scheme. For motion segmentation we make use of the so-called “continuous cuts” scheme [14, 15], which guarantees a globally optimal solution for a fixed motion field. In contrast to our previous work [16], the proposed energy functional is TV-regularized for both the motion segmentation and the displacement field, and the fidelity term is formulated as a sum of absolute values of the constraints. Minimizing TV-regularized functionals is, however, inherently difficult due to the nonsmoothness of the TV norm. This is an active field of research; see, for example, [17–20] and the references therein, which are mainly based on operator splitting methods from convex analysis that split the energy functional into its nonlinear and linear parts. We show how to solve the proposed complex energy functional with the primal-dual method of Chambolle and Pock [17], which is very efficient for a wide class of nonsmooth problems. The proposed method is then applied to register 2D MR image pairs with apparent breathing-induced sliding motion of the liver along the abdominal wall. A preliminary version of this paper was published in [21].

2. Method

In this section we describe the proposed registration method which integrates the displacement field estimation into the convex segmentation method of Chan et al. [14], in order to find smooth displacement fields whilst preserving the discontinuities.

2.1. Registration

Let be the domain of the pixel positions . We then define by the functions and our reference and template image. The aim of image registration is to find a transformation such that the relation holds. The function with , describes the displacement field and will be the intrinsic function we investigate. For convenience we will use the abbreviations , , and for , , and .

To solve a nonrigid registration problem with expected discontinuities in the displacement field, we follow a variational approach and therefore first consider the energy functional, where is a weighting parameter and and are the fidelity term and the smoothness term, respectively, which are defined as follows. The fidelity term incorporates the constraints for grey value constancy and gradient constancy, and the corresponding weighting parameters are given by . The smoothness term results in the -norm, that is, the vectorial TV of .

The energy functional is motivated by the one proposed by Brox et al. [8]. Instead of using the approximation function for the norm, the pure norm is used here for the smoothness term, and for the fidelity term the sum of the absolute grey value difference and the componentwise absolute gradient differences is taken.
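To make the structure of such an energy concrete, the following NumPy sketch evaluates a discrete analogue with an L1 data term (grey-value and gradient constancy) and a vectorial TV smoothness term. The weights alpha and gamma, the array layout, and the forward-difference discretization are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def grad(f):
    """Forward differences with Neumann boundary (illustrative discretization)."""
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]
    gy[:-1, :] = f[1:, :] - f[:-1, :]
    return gx, gy

def registration_energy(R, T_warped, u, alpha=0.1, gamma=0.5):
    """Discrete L1 data term (grey-value and gradient constancy) plus the vectorial TV
    of the displacement field u of shape (2, H, W); T_warped is T evaluated at x + u."""
    data = np.abs(T_warped - R)                              # grey-value constancy
    Rx, Ry = grad(R)
    Tx, Ty = grad(T_warped)
    data += gamma * (np.abs(Tx - Rx) + np.abs(Ty - Ry))      # gradient constancy
    u1x, u1y = grad(u[0])
    u2x, u2y = grad(u[1])
    tv = np.sqrt(u1x**2 + u1y**2 + u2x**2 + u2y**2)          # vectorial TV
    return data.sum() + alpha * tv.sum()
```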

2.2. Motion Segmentation

Now we would like to integrate the formulation of the energy functional above into the convex segmentation model of Chan et al. [14]. To split the displacement field into and , we choose a binary function where , with . By defining as a data term, we formulate our energy functional as follows, where the last term of the above energy is a regularization term defined by the TV norm and is a constant parameter.

The energy functional (6) is closely related to the one proposed by Amiaz and Kiryati [13]. Instead of applying the Heaviside function to a level set function, the binary function is used here. Furthermore, the approximation function for the norm, which was used in [13], is omitted; instead, the pure norm is used for the smoothness terms (4), and the fidelity terms are replaced by a sum of absolute values of the constraints (3).
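As a rough illustration of how the binary function may split the field, the sketch below evaluates a combined energy of the form v·(cost of u1) + (1 − v)·(cost of u2) + β·TV(v). This arrangement is an assumption inferred from the description above and from [13], not a transcription of (6); the per-pixel costs are assumed to be precomputed.

```python
import numpy as np

def tv(f):
    """Isotropic TV of a 2D array via forward differences (Neumann boundary)."""
    gx = np.diff(f, axis=1, append=f[:, -1:])
    gy = np.diff(f, axis=0, append=f[-1:, :])
    return np.sqrt(gx**2 + gy**2).sum()

def segmentation_energy(v, cost1, cost2, beta=1.0):
    """Hypothetical combined energy: v selects the per-pixel cost of the field u1,
    (1 - v) that of u2, plus a TV penalty on the (relaxed) segmentation function v."""
    return (v * cost1 + (1.0 - v) * cost2).sum() + beta * tv(v)
```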

As pointed out by Chan et al. in [14], (6) is strongly related to the Mumford-Shah functional [4] and can be written as follows, where denotes the perimeter of the set .

Remark 1. One can show that a global minimizer of can be found by solving the convexified problem and finally setting for almost every with (see Proposition  1 in [16]). In fact, this holds for any data terms and that are measurable.

Finally, we obtain the aimed displacement field by setting .

In Figure 1 we illustrate our method by an example. The motion segmentation function splits the displacement field into the two displacement fields and .

3. Optimisation

3.1. Iterative Scheme

To facilitate the optimisation procedure we replace the fidelity term in (3) by its linearised version (8), where the terms defined in (9) are evaluated with fixed.
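A minimal sketch of such a linearisation for the grey-value constancy part only: the template is warped with the current estimate and the residual is expanded to first order. The variable names, the array layout, and the use of SciPy's map_coordinates for the warping are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def linearised_residual(R, T, u_k):
    """First-order linearisation of the grey-value constancy term around the current
    displacement estimate u_k (shape (2, H, W), x-component first by assumption).
    Returns rho0, Tx, Ty such that rho(du) ~ rho0 + Tx*du[0] + Ty*du[1]."""
    H, W = R.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # warp the template with the current displacement (bicubic, as used in the paper)
    T_w = map_coordinates(T, [yy + u_k[1], xx + u_k[0]], order=3, mode="nearest")
    Ty, Tx = np.gradient(T_w)      # spatial derivatives of the warped template
    rho0 = T_w - R                 # residual at du = 0
    return rho0, Tx, Ty
```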

The minimization of the energy functional (6) with respect to , , and is then performed by the following iterative scheme:
(1) For fixed and , solve (10).
(2) For fixed , solve (11).
(3) For fixed , solve (12).
Although problem (10) is convex, it is important to note that the overall minimization of the energy functional (6) is a nonconvex optimisation problem. For the minimization of nonconvex energy functionals there exist convex relaxation methods, which are able to provide solutions close to a global minimum. Recently, Strekalovskiy et al. [22] proposed such a method for nonconvex vector-valued labelling problems. In this paper, however, we will not make use of these kinds of methods.
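A skeleton of this alternating scheme might look as follows; the solver callables are hypothetical placeholders for the primal-dual subproblem solvers derived in the remainder of the paper.

```python
def alternate_minimisation(v, u1, u2, solve_v, solve_u, n_outer=10):
    """Skeleton of the alternating scheme (1)-(3). solve_v and solve_u are
    caller-supplied solvers for the subproblems (hypothetical interface)."""
    for _ in range(n_outer):
        v = solve_v(u1, u2)       # step (1): segmentation for fixed u1, u2
        u1 = solve_u(v, u1)       # step (2): displacement field where v = 1
        u2 = solve_u(1 - v, u2)   # step (3): displacement field where v = 0
    return v, u1, u2
```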

Note that, compared to our previous work [16], this time we did not introduce an auxiliary in the energy functional. Therefore our iterative scheme consists of only steps instead of . The reason for this change is that we intend to use a different numerical approach to solve our problem. More precisely, to solve subproblems (10), (11), and (12) in a fast and efficient way, we follow a primal-dual approach as described by Chambolle and Pock in [17]. We therefore recapitulate in the next section the basic notations and formulations.

3.2. Basic Framework for the Primal-Dual Approach of Chambolle and Pock

First, we define by and two finite-dimensional real vector spaces. Their inner products are denoted by and , respectively, and their induced norms are given by and , respectively. The general nonlinear primal problem we consider is of the form (13), where and are proper, convex, and lower semicontinuous and the map is a continuous linear operator. The corresponding primal-dual formulation of (13) is the saddle-point problem, with being the convex conjugate of . We assume that the problems above have at least one solution, so that the following holds, where and are the subdifferentials of the convex functions at and at . Furthermore, we assume that and are “simple”; that is, the resolvent operators and are easy to compute. For a convex function the resolvent of the operator at can be calculated in our case by (16). In this paper we will only make use of Algorithm 1 in [17] with the extrapolation parameter . Although interesting, the usage of the other proposed algorithms is left for later research.
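For reference, a generic sketch of Algorithm 1 of [17] with extrapolation parameter theta. The linear operator K, its adjoint, and the two resolvent (proximal) operators are supplied by the caller, and the step sizes are assumed to satisfy tau * sigma * ||K||^2 <= 1.

```python
import numpy as np

def chambolle_pock(x, y, K, K_adj, prox_G, prox_Fstar,
                   tau, sigma, theta=1.0, n_iter=200):
    """Sketch of Algorithm 1 in [17]; K/K_adj are callables for the linear operator
    and its adjoint, prox_G/prox_Fstar are the resolvent operators."""
    x_bar = x.copy()
    for _ in range(n_iter):
        y = prox_Fstar(y + sigma * K(x_bar), sigma)   # dual ascent step
        x_new = prox_G(x - tau * K_adj(y), tau)       # primal descent step
        x_bar = x_new + theta * (x_new - x)           # extrapolation
        x = x_new
    return x, y
```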

To apply Algorithm  1 in [17] to the minimization problems (10), (11), and (12), we first need to rewrite them in their discretized version, then identify the functions and , and finally derive the resolvent operators and .

For the discrete setting we therefore define by the pixel positions in the image domain with being the spatial step size. For the calculations of the finite differences, the discrete divergence operator, the discretized inner products, and further details, we refer the reader to [17] and the references therein.
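A sketch of the standard forward-difference gradient and the matching discrete divergence (its negative adjoint), in the spirit of the discretization in [17]; the concrete boundary handling shown here is an assumption.

```python
import numpy as np

def grad2d(f):
    """Forward differences with Neumann boundary conditions (sketch)."""
    g1 = np.zeros_like(f)
    g2 = np.zeros_like(f)
    g1[:-1, :] = f[1:, :] - f[:-1, :]   # difference along axis 0
    g2[:, :-1] = f[:, 1:] - f[:, :-1]   # difference along axis 1
    return np.stack([g1, g2])

def div2d(p):
    """Discrete divergence, the negative adjoint of grad2d,
    so that <grad2d(f), p> = -<f, div2d(p)> for all f, p."""
    p1, p2 = p
    d1 = np.zeros_like(p1)
    d2 = np.zeros_like(p2)
    d1[0, :] = p1[0, :]; d1[1:-1, :] = p1[1:-1, :] - p1[:-2, :]; d1[-1, :] = -p1[-2, :]
    d2[:, 0] = p2[:, 0]; d2[:, 1:-1] = p2[:, 1:-1] - p2[:, :-2]; d2[:, -1] = -p2[:, -2]
    return d1 + d2
```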

In the following sections we will derive the resolvent operators for the three given minimization problems (10), (11), and (12).

3.3. Resolvent Operators for Problem (10)

Let us consider the continuous problem (10). As already mentioned, we need to rewrite the problem in its discretized version to be able to apply a primal-dual approach for the minimization. We use the spaces and and their inner products (19). Using the rectangle rule we can rewrite problem (10) in the discretized version (20), where the factor can be neglected, since it has no influence on the optimal solution. The norm , with , is defined by , where .

Comparing (20) to (13), we see that , , and . The solution of the resolvent operator with respect to can be derived as and the one with respect to as

Remark 2. The primal-dual formulation of problem (20) is a special case of the general problem considered in Pock et al.’s work [23], and the resulting primal-dual algorithm is then the same as described there. Namely, the dual variable is projected onto the convex set, a disc with radius , and the projection of the primal variable onto the feasible convex set is achieved by truncation.
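The two operations described in Remark 2 can be sketched as follows; the array shapes and the name of the radius parameter are assumptions.

```python
import numpy as np

def project_disc(p, radius):
    """Pointwise projection of a dual field p of shape (2, H, W) onto discs of the given radius."""
    norm = np.sqrt(p[0] ** 2 + p[1] ** 2)
    return p / np.maximum(1.0, norm / radius)

def truncate_unit_interval(v):
    """Projection of the relaxed segmentation variable onto the feasible set [0, 1]."""
    return np.clip(v, 0.0, 1.0)
```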

Remark 3. Observant readers have probably noticed that we slightly misused the mathematical notation to make the formulas more pleasing to the eye. More precisely, instead of writing we should have used , since discretization is performed after the function is applied. In the following we will also make use of this sloppy notation for the functions , , and given in (9).

3.4. Resolvent Operators for Problem (11)

Now we consider the continuous problem (11). A closer look at this minimization problem shows that we only receive information about on the domain where will be set to . Although theoretically reasonable, the numerical implementation is facilitated by having a smooth extension of to the domain . We therefore consider instead problem (23). Comparing (11) to (23), the only difference is that the factor is no longer applied to the smoothness term. For the primal-dual approach we use this time the spaces and . The inner product of is the same as in (19), and the one for is defined as follows. After the discretization of problem (23) we obtain (25), where this time the norm , with , is defined by with .

Comparing (25) to (13), we see that the corresponding functions are , , and (26). From the resolvent operator with respect to we obtain (27). The derivation of the resolvent operator with respect to is not as straightforward, and more effort has to be put in to find a suitable solution. It is common that the resolvent operators of functions that are sums of quadratic and absolute norms lead to so-called thresholding schemes [24, 25]. This will also be the case here. Having a closer look at the definitions (26) and (16), we see that we have to solve (28). For we can conclude from (28) that . On the other hand, for we have to distinguish the cases (29), which turn out to be in total. The chosen numbering for the different cases is shown in Table 1.
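As a one-dimensional illustration of such a thresholding scheme (not the operator derived in this paper), the resolvent of an absolute-value term added to the quadratic proximity term reduces to classical soft thresholding:

```python
import numpy as np

def prox_abs(x, b, tau):
    """Resolvent of f(z) = |z - b|, i.e. argmin_z 0.5*(z - x)**2 + tau*|z - b|,
    which is the classical soft-thresholding (shrinkage) operator."""
    return b + np.sign(x - b) * np.maximum(np.abs(x - b) - tau, 0.0)
```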

3.4.1. Geometric Interpretation of Problem (28) with

Before explaining the derivation of the explicit solutions and the required reformulation of the conditions for the different cases in (29) in more detail, we want to give deeper insight into the geometric interpretation of problem (28) with . To facilitate the notation in the following, we introduce the terms (30). Using the definitions in (9) and the notation in (30), we can rewrite , , and as in (31). Note that defines a line whose normal vector is parallel to the vector . Similarly, defines a line and a line .

Now we can argue similarly to Zach et al. [25] for the geometric interpretation. The mathematical structure of their minimization problem, for which they derive the thresholding scheme, is very similar to the one we have in (28) with . The first term in (28), , is the squared distance of to , and the terms , , and define the unsigned distances to the lines , , and , respectively. Considering now all with a fixed distance to , we see that problem (28) with is minimized for closest to the three lines , , and . See Figure 2 for an illustration.
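To make the analogy concrete, the following sketch implements the pointwise thresholding step of Zach et al. [25] for a single linearised grey-value constancy constraint (i.e., a single line). The parameter name lam_theta and the array layout are assumptions; the three-line problem (28) treated in this paper generalizes this idea.

```python
import numpy as np

def tv_l1_threshold_step(u_bar, rho0, Tx, Ty, lam_theta):
    """Pointwise thresholding step of Zach et al. [25] for one linearised constraint
    rho(u) = rho0 + Tx*(u_x - u_bar_x) + Ty*(u_y - u_bar_y); u_bar has shape (2, H, W)."""
    grad_sq = Tx ** 2 + Ty ** 2 + 1e-12       # avoid division by zero
    step = np.where(rho0 < -lam_theta * grad_sq, lam_theta,
           np.where(rho0 > lam_theta * grad_sq, -lam_theta, -rho0 / grad_sq))
    return np.stack([u_bar[0] + step * Tx, u_bar[1] + step * Ty])
```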

3.4.2. Derivation of the Explicit Solutions for Problem (28) with

We are now ready to derive an explicit solution for each of the cases by using (28). We show the derivation of the solutions for only four different cases; the explicit solutions for the remaining cases can be calculated in a similar fashion.

Case 1 and Similar Ones. Let us consider the first case in Table 1, given by (33). Solving problem (28) for this case leads to an equation from which, by using the notation in (31), the explicit solution (35) follows. The derivation of the explicit solutions for cases , , , , , , and in Table 1 is performed similarly. There are, however, other cases left which need a different treatment, as, for example, case number .

Case 3 and Similar Ones. The condition for case in Table 1 is given by . Here we have to distinguish the two situations and . (i) If , we have the following.

From we see that the solution has to lie on the line and, of course, its distance to the other lines and should be minimal. Therefore we can assume that the solution is of the form (37), where is the normal to . Geometrically this means that is first orthogonally projected onto the line and afterwards moved along the line such that it minimizes the distance to and . The unknown is determined by replacing the term in the equation using (37), which leads to (38). Solving problem (28) for the current case and replacing again the term using (37) leads, after some simple calculations, to the second parameter . (ii) If , we have the following.

Since vanishes, there is no line that we have to consider, and we can directly derive the solution by solving (28), similarly to the first case in Table 1, and we get the solution . Finally, we can derive the explicit solutions for cases , , , , , , , , , , and in Table 1 similarly to case above.

Case 15 and Similar Ones. Let us now consider case number , which has not been covered so far. The condition is this time given by , and we see that we now have two equality signs, namely, for and . This time we have to distinguish the situations , , and . (i) If , we have the following.

From and we see that the solution has to lie on the lines and and should have a minimal distance to the line . Thus, we assume that the explicit solution is of the form (42). Similarly to the previous paragraph, replacing the term in the equation by making use of (42) leads to the same as in (38). For the parameter we get (43). The case where in (43), which is equivalent to , indicates that the lines and are not parallel and that they therefore intersect in one point. The solution we look for will be at this intersection, and the corresponding is calculated by replacing the term in the equation using (42). For the case where in (43), which is equivalent to , we conclude that the lines and are parallel. This means that if the solution lies on the line , it necessarily also lies on the line , and the parameter is calculated by solving problem (28) for the current case and replacing the term using (42). (ii) If , we have the following.

Since vanishes, there is no need to consider the line , and we focus instead on the line , on which the solution still has to lie. We therefore assume the solution has the form (44). Replacing this time the term in the equation with the help of (44) results in (45). After solving problem (28) and replacing the term using (44) we get . (iii) If , we have the following.

This time both and vanish, and the corresponding lines and no longer play a role in the derivation of the solution. We can directly solve problem (28) without any prior assumption, as done before in other cases, and we get . The derivation of the explicit solutions for cases , , , , and in Table 1 is done similarly to case explained above.

Case 21. There is one case left which has not been discussed so far. The corresponding condition from Table 1 is . This time we have three equality signs, and we have to distinguish the situations , , , and . (i) If , we have the following.

Since , , and , we see that the solution this time has to lie on all three lines , , and . For we assume that the explicit solution is of the form (49). As in the paragraphs before, replacing the term in by making use of (49) leads to being equal to the one in (38). To determine in (49), we argue as follows: (a) If , which is equivalent to , we use (49) in and get . (b) If and , that is, and , we use (49) in and get . (c) If and , this means that and ; thus all three lines , , and are parallel, and by solving problem (28) and using (49) we get . (ii) If , we have the following.

As in the discussion of (44) in case , we assume the solution is of the form (53). Replacing the term in by making use of (53) leads to being equal to the one in (45). For in (53), we argue as follows: (a) If , which is equivalent to , we make use of (53) in and get . (b) If , which means , we get by solving (28) and using (53). (iii) If , we have the following.

Here, and vanish, and we therefore focus on the line on which the solution has to lie. We assume the explicit solution is of the form (56), where is calculated by using (56) to replace in , and is given by . (iv) If , we have the following.

In this situation we do not have to consider the lines , , and at all and solving (28) results in

3.4.3. Reformulation of the Conditions for Problem (28) with

Since is unknown but in (9) is known, we have to rewrite the conditions for the cases, which are defined by (29), in terms of , , and . This can be done by incorporating the derived explicit solutions and finding proper partitioning of the space spanned by , , and (see Figure 3). A similar approach was used in the work of Zach et al. [25] to reformulate the conditions for their proposed thresholding scheme.

As one can see in Figure 3, we indicate a cube in the coordinate system that will facilitate the problem of finding a proper partitioning of the considered space. The edge length of the cube is chosen in such a way that the critical values of the reformulated conditions are covered; the meaning of this will become clearer below. For later use, we number the faces of the cube as shown in Figure 4.

The orientation of the face numbers in Figure 4 indicates from which direction we will look at the faces.

In the following we want to show how we can rewrite the conditions for the different cases. The same four groups of cases which were discussed in Section 3.4.2 are considered here too and for each of them the reformulation of the conditions can be achieved in a similar fashion.

Case 1 and Similar Ones. We start again with the first case in Table 1, which belongs to the group of cases that can be handled most easily. The condition for this case is given in (33), and by using (35) to replace the term in the condition we get , or rewritten, (60). One of the critical values mentioned before concerning the edge length of the cube shown in Figure 3 can now be determined for case number , namely, by the values on the right-hand side in (60).

The reformulation of the conditions for cases , , , , , , and in Table 1 is achieved similarly to this case and the remaining seven critical values, which are necessary to define the edge length of the cube, are determined in the same way as above.

Remark 4. Before moving on to the next group of cases, we want to explain why the cube in Figure 3 helps us in finding a proper partitioning of the space spanned by , , and . Our aim is to divide the space into parts, each one defining a region for which exactly one of the cases, with its already derived explicit solution, applies. This partitioning should not be done arbitrarily; it has to be meaningful and conform to the formulations already set up. We mentioned before that we want the eight critical values from the previous paragraph to be covered by the cube. The reason is that this makes it possible to reduce the partitioning problem of the whole space to a partitioning problem of the cube. The uniquely defined regions of the cube are the ones corresponding to the eight cases discussed in the paragraph above, and they form its corners. In Figure 5 we show the location of the regions for the different case numbers. For simplicity we drew all the regions with the same size, although they can and most likely will have different sizes. Note that the region for case number is not visible in the figure, because it lies fully in the interior of the cube.
Let us now focus on one face of the cube. The boundaries of the corner regions are known from the critical values obtained in the previous paragraph. They are fixed, and we introduce for them the names , , , , , , , and , as shown in Figure 6.
For face number , for example, the values and will be the boundary values of region (see Figure 5), and from (60) we can conclude that and .
For each face it always holds that . For example, to see that , we consider the reformulation (62) of the condition for case number , which can be derived, as mentioned before, similarly to case number . For face number we see in the same manner as before, by having a look at Figure 5 and (62), that . Since (see (31)), we can conclude that .
Let us consider again an arbitrary face of the cube. Although we have boundaries for the corner regions, the ones for the regions in between are not fully defined. Since a rectangular division of the face seems adequate, we investigate how to achieve such a division. In Figure 7 a face is depicted with its corner regions, and two possible divisions into rectangles are shown, which we call the “clockwise” and the “anticlockwise” division method.
Depending on the values of , , , , , , , and , sometimes both the clockwise and the anticlockwise division method work, as, for example, in Figure 7. There can, however, also be values for which only the clockwise method, only the anticlockwise method, or neither of the two works; in the latter situation another adequate division method has to be defined.
In Table 2 we list the different possible situations for the values , , , , , , , and .
Furthermore, we split the last situation S9 into its parts for later use and introduce the corresponding labelling in Table 3.
In the following, we go through the different situations given in Tables 2 and 3 and decide which kind of method can be used to achieve a division of the face into rectangular regions:
(i) Situation S1 in Table 2 is illustrated in Figure 8 for face number and the corresponding region numbers. For this situation, neither the clockwise nor the anticlockwise division method will work, since the corner regions and overlap. An adapted division method, which is depicted in Figure 8, will be used instead. Note that in this situation certain corner regions, which were defined through the reformulation of the conditions for the group of cases similar to case , have to be adapted by removing from them the region where they overlap.
(ii) For the situations S2, S3, S5, and S6 in Table 2 and S9.1, S9.2, S9.3, S9.4, and S9.5 in Table 3 we can use the clockwise division method illustrated in Figure 7. We define the set of situations for which the clockwise division method can be applied by (63).
(iii) Situation S9.7 in Table 3 again needs special treatment to achieve a meaningful division, since neither the clockwise nor the anticlockwise division method can be applied because of the overlapping of the corner regions and , if we consider face number , for example. The adapted division method, which will be used instead, is shown in Figure 9 for face number and the corresponding region numbers. Note that also in this situation certain corner regions have to be adapted by removing from them the region where they overlap.
(iv) Finally, for the situations S4, S7, and S8 in Table 2 and S9.6, S9.8, and S9.9 in Table 3 we can use the anticlockwise division method illustrated in Figure 7. The set of situations for which the anticlockwise division method can be applied is then given by (64).
We are now able to give an idea of how to reformulate the conditions for the remaining groups of cases.

Case 3 and Similar Ones. Reformulating the cases in this group with the same approach as for case number will not lead to a proper partitioning of the space shown in Figure 3. Instead we will use a more geometrically motivated approach which depends on the different situations the boundary values can take.

First we can derive, independently of which approach we use, that . This can be seen by considering Figure 5, from which we can deduce that region has to lie between regions and . By using the critical values of these adjacent regions, which can be determined from (60) and (62), we arrive at the inequality above.

For the reformulation of the condition with respect to , we consider the different situations in which the boundary values of the corner regions can be for face number :
(i) If the boundary values are in situation S1 given in Table 2, the reformulation is given by .
(ii) If the boundary values are in one of the situations in the set defined in (63) or in situation S9.7 in Table 3, we get .
(iii) If the boundary values are in one of the situations in (64), the reformulation is .
For the reformulation of the condition with respect to , we consider this time the different situations belonging to face number :
(i) If the boundary values are in situation S1, the reformulation is given by .
(ii) If the situation of the boundary values is in the set or corresponds to situation S9.7, we get .
(iii) Finally, if the situation of the boundary values is in , the reformulation is .
Finally, the reformulation of the conditions for cases , , , , , , , , , , and in Table 1 is achieved similarly to this case.

Case 15 and Similar Ones. For this group of cases we will again make use of the different situations of the boundary values, but this time only for face number .

For the reformulation of the condition with respect to , we get the following list:
(i) If the boundary values are in situation S1,
(ii) If the situation of the boundary values is in the set ,
(iii) If the boundary values are in situation S9.7,
(iv) If the situation of the boundary values is in the set ,

For the reformulation of the condition with respect to , we get the following list:
(i) If the boundary values are in situation S1,
(ii) If the situation of the boundary values is in the set ,
(iii) If the boundary values are in situation S9.7,
(iv) If the situation of the boundary values is in the set ,
Finally, we get the following list for the reformulation of the condition with respect to :
(i) If the boundary values are in situation S1,
(ii) If the situation of the boundary values is in the set or ,
(iii) If the boundary values are in situation S9.7,
The reformulation of the conditions for cases , , , , and in Table 1 is then done similarly to case explained above.

Case 21. The last remaining case is case number . The corresponding region contains the part of the cube which has not yet been covered by the regions of the other cases. Additionally, overlappings between some of the regions appearing at different faces are unfortunately possible, as, for example, between region and region ; these overlappings are also assigned to case number in order to finally achieve a proper partitioning.

One could get the impression that this handling of the overlappings is mathematically rather sloppy, but to justify it we recall that an optimal solution of problem (28) will result in rather small values for , , and and should therefore ideally be close to case in Table 1. Extending the region for case number should therefore not be harmful. Furthermore, for small values of , , and , a reformulation of the condition is more likely to lead to overlappings of certain region parts. It therefore seems adequate to assign these overlappings to region .

It is possible to define region in mathematical terms, but since its content was already explained above, doing so would add little. Moreover, an explicit formulation of the region is not needed for the implementation, since it can easily be determined with the help of the other regions.

3.5. Resolvent Operators for Problem (12)

This section is very similar to Section 3.4, and we mainly just have to replace by and by . To achieve a smooth extension of to the domain , we consider, instead of problem (12), problem (83). The spaces and and their inner products are defined in the same way as in Section 3.4, and the discretization of (83) is performed in the same manner as before.

The resolvent operator with respect to is identical to (27). Finally, compared to the section before, there are only very slight changes of the resolvent operator with respect to .

4. Implementation and Results

Although we have a convex minimization problem with respect to the motion segmentation function in (10) and linearised the fidelity term in (8), we should keep in mind that the overall minimization problem remains nonconvex and that we have to update and regularly. Therefore a coarse-to-fine strategy is applied to avoid the risk of getting stuck in a local minimum during the optimisation. The minimal size of the images at the coarsest level is set to and the scaling factor for the pyramid to . At the coarsest level we initialise the displacement fields and trivially with .

Because of Remark 1 in Section 2.2 one would expect that the choice of the initialisation for the function can be arbitrary. Indeed, whatever initialisation we choose, a global minimizer for the motion segmentation can be found for fixed and . But in our case the values for and are updated regularly during optimisation.

The final iteration scheme consists of two loops. The outer loop iterates over the pyramid levels. In each level an inner loop updates the values for , , and , following steps (1)–(3) in Section 3.1. Thus, in each iteration of this inner loop one step of the primal-dual Algorithm 1 from [17] is performed with the appropriate resolvent operators from the previous sections to update , , and . To achieve a better convergence of we update and only every 50th iteration. The motion segmentation is obtained by choosing and setting (see Remark 1 in Section 2.2).

As soon as the finest level is reached, the inner loop is executed until a certain tolerance or the maximum number of iterations is reached. The final displacement field is then obtained by setting . Bicubic interpolation is used to calculate the images during the iterations and to obtain the final registered image .
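The overall scheme could be organised roughly as in the following sketch. It is an illustration only (the authors' implementation is in MATLAB); the function pd_step, the pyramid parameters, and the update flag are assumptions.

```python
import numpy as np
from skimage.transform import rescale, resize

def coarse_to_fine(R, T, pd_step, n_levels=4, eta=0.5, n_inner=500, update_every=50):
    """Skeleton of the coarse-to-fine optimisation (illustration only). pd_step performs
    one primal-dual iteration of steps (1)-(3) and returns updated (v, u1, u2);
    update_uv signals when the displacement fields are to be updated."""
    pyramid = [(R, T)]
    for _ in range(n_levels - 1):                      # build the pyramid, coarsest level last
        R, T = rescale(R, eta), rescale(T, eta)
        pyramid.append((R, T))
    H, W = pyramid[-1][0].shape
    v, u1, u2 = np.ones((H, W)), np.zeros((2, H, W)), np.zeros((2, H, W))
    for level, (Rl, Tl) in enumerate(reversed(pyramid)):
        if level > 0:                                  # upsample the previous solution
            v = resize(v, Rl.shape, order=1)
            u1 = np.stack([resize(c, Rl.shape, order=1) for c in u1]) / eta
            u2 = np.stack([resize(c, Rl.shape, order=1) for c in u2]) / eta
        for it in range(n_inner):
            v, u1, u2 = pd_step(Rl, Tl, v, u1, u2,
                                update_uv=(it % update_every == 0))
    u = np.where(v > 0.5, u1, u2)                      # threshold v at 0.5 (cf. Remark 1)
    return u, v
```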

In Figure 10, we show the registration result of a synthetic example, where a textured circle moves diagonally downwards. Although our energy functional is convex with respect to the motion segmentation function , the overall minimization task remains a nonconvex problem. To decrease the influence of the nonconvexity during the minimization procedure, certain terms are linearised and a coarse-to-fine strategy is applied. Experiments for the synthetic example in Figure 10 show that, due to this workaround, different initialisations for could be used to achieve similar results, as shown in Figure 11. In order to show the effect of the smoothness parameter for the TV norm, we show in Figure 12 that smoother results are obtained as increases.

To evaluate the performance of the proposed method, we present qualitative and quantitative results. Since we are interested in the discontinuities of the displacement field, we compare the proposed method to methods that are able to preserve discontinuities in the displacement field, in this case the demon algorithm with anisotropic diffusion filtering [26], the registration algorithm of Brox et al. [8], and our previous work [16].

The qualitative comparison is shown in Figure 13. As can be clearly seen, the proposed method achieves crisper discontinuities in the displacement field than the demon algorithm with anisotropic diffusion filtering and the registration algorithm of Brox et al. Furthermore, the proposed method nicely separates the motion of the abdominal wall from that of the organs.

In Figure 14 a quantitative evaluation is shown for different liver image pairs from the sequences of [3]. We chose the parameters for all the methods by optimising them with respect to these image pairs. The parameters of our method were set to , , , and . For the demon algorithm with anisotropic diffusion filtering we could use the suggested parameters, for Brox et al.’s method we used , , and , and finally for our previous method we used the parameters suggested there, namely, , , , , and .

To quantitatively assess the performance of the four methods, we calculated the mean squared error (MSE) and the normalised mutual information (NMI), with the grey values scaled from to . For all our examples the proposed method performed better than the demon algorithm with anisotropic diffusion and the registration algorithm of Brox et al. We used the Kolmogorov-Smirnov test to check the results for normality, using the R software package (Version 2.10.1), and considered results significant at the 5% level. The -test showed that the proposed method delivered significantly better results than the demon algorithm with anisotropic diffusion filtering and the method of Brox et al. with both . There was, however, no significant difference between the proposed method and our previous method [16] in terms of MSE and NMI.
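For reference, a sketch of the two evaluation measures; the particular NMI variant (H(A)+H(B))/H(A,B) and the histogram binning are assumptions, since the paper does not state them.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def nmi(a, b, bins=64):
    """Normalised mutual information (H(A) + H(B)) / H(A, B); the variant and the
    number of histogram bins are assumptions, not taken from the paper."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy.ravel())
```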

In a next step we compared the running times of the proposed method to those of our previous work [16] for the liver image pairs. Both methods were implemented in MATLAB and the experiments were performed on a 64-bit Linux system with 1.2 GHz. We used again the parameters , , , and for the proposed method and for our previous work the parameters , , , , and . In [16] the displacement fields and were updated every 10th iteration. To provide a fair comparison, we therefore used the same update frequency for the proposed new method. Furthermore, the maximum number of iterations in the finest level is set to for both methods. The timing results for the liver image pairs are shown in Table 4 together with the mean and standard deviation. The running times of both methods are comparable, and the proposed new method was, at around  s, slightly faster than the old method. The old method makes use of some optimised built-in MATLAB routines, whereas for the proposed method there is still considerable potential to optimise the code.

5. Conclusion

In this paper we presented a primal-dual method for discontinuity-preserving nonrigid registration that makes use of the continuous cuts framework. The resulting motion segmentation positively influences the motion estimation by sharpening the discontinuities in the displacement field. The minimization of the energy functional was implemented in a coarse-to-fine strategy and exploits the speed of the primal-dual algorithm studied in [17]. The experimental results demonstrated the favourable performance of the proposed method in comparison with the demon algorithm with anisotropic diffusion filtering [26] and the registration algorithm of Brox et al. [8].

Large displacements of deforming organs can cause misregistrations even when a coarse-to-fine approach is used. In a recent publication [27], Brox and Malik proposed including point correspondences from descriptor matching in the variational optical flow formulation. In future work we plan to similarly include known corresponding landmarks in the functional as hard constraints, as we have preliminarily done in [28]. However, we plan to use our Tracking the Invisible approach [29] for locating the matching point correspondences.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was funded by a donation of Dr. Hansjörg Wyss.