A Convex Relaxation Bound for Subgraph Isomorphism
In this work a convex relaxation of the subgraph isomorphism problem is proposed, which leads to a new lower bound that can provide a proof that no subgraph isomorphism exists between two graphs. The bound is based on a semidefinite programming relaxation of a combinatorial optimisation formulation for subgraph isomorphism and is explained in detail. We consider subgraph isomorphism problem instances on simple graphs, which means that only the structural information of the two graphs is exploited and other information that might be available (e.g., node positions) is ignored. The bound rests on the fact that a subgraph isomorphism always leads to zero as the lowest possible optimal objective value of the combinatorial problem formulation. Therefore, a lower bound larger than zero constitutes a proof that no subgraph isomorphism exists for that problem instance. Note, however, that the converse does not hold: a lower bound of zero or below does not imply that a subgraph isomorphism must be present; it only indicates that a subgraph isomorphism cannot be excluded. In addition, the relation of our approach to the reformulation of the largest common subgraph problem as a maximum clique problem is discussed.
The subgraph isomorphism problem is a well-known combinatorial optimization problem that often includes the task of finding the corresponding node matching as well. It is of particular interest in computer vision, where it can be exploited to recognise objects. For example, if an object in an image is represented by a graph, the object can be identified as a subgraph within a possibly larger scene graph. Several approaches have been proposed to tackle the subgraph isomorphism problem; we refer to [1–4] and the references therein.
Error-correcting graph matching, also known as error-tolerant graph matching, is a general approach to calculating an assignment between the nodes of two graphs. It is based on the minimisation of graph edit costs, which result from predefined edit operations that turn one graph exactly into the other. Commonly introduced graph edit operations are deletion, insertion, and substitution of nodes and edges. Each graph edit operation has an application-dependent cost assigned to it. The minimal graph edit cost defines the so-called edit distance between two graphs. The idea of defining such a distance for graph matching goes back to Sanfeliu and Fu in 1983. Before that, the edit distance was mainly used for string matching. Several approaches for error-correcting graph matching have been proposed that are based on different methods such as tree search and genetic algorithms, to name a few.

In this paper we propose and apply a semidefinite programming (SDP) relaxation to a quadratic optimization formulation for subgraph isomorphism. In combinatorial optimization, semidefinite programming represents a valuable approach to approximate NP-hard problems and to obtain bounds for them. The recent increase of interest in semidefinite programming is largely driven by the successful extension of interior point algorithms from linear programming to semidefinite programming. A well-known example of semidefinite programming is the MAX-CUT approximation by Goemans and Williamson. With the increase in computational power, SDP has turned out to be useful for a growing range of real-world problems. For example, it has been applied to several problems in the field of computer vision, including segmentation and partitioning, grouping, restoration, matching, graph seriation, and camera calibration.
2. Contribution and Aim of the Paper
The main contribution of this paper lies in the convex relaxation of a subgraph isomorphism problem and the identification of a lower bound for this optimization problem. The computation of that bound is based on the SDP approximation of a combinatorial optimization formulation for subgraph isomorphism. The combinatorial optimization formulation and its convex relaxation are explained in detail. The approach is designed to find a subgraph isomorphism that maps the entire node set of the possibly smaller graph to a subset of the nodes of the second graph. We also discuss an interesting relation to an approach that is based on a reformulation of the largest common subgraph problem into a largest clique problem [14, 15].
3. Organisation of the Paper
After providing the notation we use, we introduce a combinatorial quadratic optimization formulation for the subgraph isomorphism problem that can be interpreted as an error-correcting graph matching approach.
The integer optimization problem we end up with is in general an indefinite quadratic integer optimization problem, which is NP-hard, as Pardalos and Vavasis showed for indefinite quadratic programs. Then we explain in detail a convex SDP relaxation of the combinatorial problem that leads to a lower bound for the subgraph isomorphism problem. The bound can be computed with standard methods for semidefinite programs (see, e.g., [18–20]). Finally, our experiments show that the bound can be tight enough to prove that no subgraph isomorphism between two graphs can be found.
4. Notation
In this paper, we consider simple graphs G = (V, E) with node set V and edge set E. We denote the first (possibly smaller) graph with G_1 and the second graph with G_2. The corresponding node sets V_1 and V_2 contain K and L nodes, respectively. We assume that K ≤ L, which is in fact no restriction as we can always choose G_2 to represent the larger graph. We make extensive use of the direct product A ⊗ B, which is also known as the Kronecker product. It is the product of every matrix element a_ij of A with the whole matrix B, resulting in the larger matrix (a_ij B).
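The Kronecker product used throughout the paper is directly available in numerical libraries; the following minimal Python sketch (the matrix sizes are illustrative, not taken from the paper) shows the resulting block structure:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Every element a_ij of A multiplies the whole matrix B,
# giving a block matrix of shape (2*2, 2*2).
C = np.kron(A, B)
print(C.shape)  # (4, 4)
```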
A subgraph isomorphism is a mapping of all nodes of the graph G_1 to a subset of V_2 with K nodes of the graph G_2 such that the structure is preserved. That means that any two nodes i and k from G_1 that are adjacent must be mapped to nodes j and l in G_2 that are adjacent too. If the nodes i and k in G_1 are not adjacent, they must be mapped to nonadjacent nodes in G_2. The same has to be true for the inverse mapping, which maps the nodes of the subgraph of G_2 back to the nodes of G_1.
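For very small graphs this definition can be checked directly by exhaustive search; the following sketch does exactly that (it is only an illustration of the definition, not part of the proposed method, which avoids such enumeration):

```python
import numpy as np
from itertools import permutations

def is_structure_preserving(A1, A2, m):
    """True if the injective mapping m (m[i] = image of node i) maps adjacent
    node pairs to adjacent pairs and nonadjacent pairs to nonadjacent pairs."""
    K = A1.shape[0]
    return all(A1[i, k] == A2[m[i], m[k]]
               for i in range(K) for k in range(i + 1, K))

def find_subgraph_isomorphism(A1, A2):
    """Exhaustive search over all injective mappings (tiny graphs only)."""
    K, L = A1.shape[0], A2.shape[0]
    for m in permutations(range(L), K):
        if is_structure_preserving(A1, A2, m):
            return m
    return None

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
# A 4-node graph containing a triangle on nodes 0, 1, 2 plus a pendant node 3.
host = np.array([[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0]])
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])

print(find_subgraph_isomorphism(triangle, host))  # (0, 1, 2)
print(find_subgraph_isomorphism(triangle, path))  # None
```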
5. Combinatorial Objective Function
In this section we propose and show the correctness of a combinatorial problem formulation for finding a subgraph isomorphism. The general idea is to find a bipartite matching between the set of nodes of the smaller graph and a subset of nodes of the larger graph. The matching is evaluated by an objective function that can be interpreted as a comparison of the structure between all possible node pairs in the first graph and the structure of the node pairs (to which the nodes are matched) in the second graph. A matching that leads to no structural differences has no costs and represents a subgraph isomorphism. Mathematically, the evaluation can be formulated as a quadratic objective function x^T Q x, where x represents a mapping and Q contains the problem data of the subgraph isomorphism problem. The full task of finding a subgraph isomorphism can be stated as the following combinatorial quadratic optimization problem, whose details are explained below:

min_x x^T Q x subject to C_1 x = e, C_2 x ≤ e, x ∈ {0,1}^{KL}. (5.1)
The constraints make use of the matrices C_1 = I_K ⊗ e_L^T and C_2 = e_K^T ⊗ I_L and ensure that the vector x is a binary 0,1-indicator vector which represents a bipartite matching between the two node sets of the graphs such that each node in graph G_1 has a single partner in the graph G_2. Here e represents a vector with all elements 1. For our purposes, the elements of the indicator vector x are arranged in the following order: x = (x_11, x_12, ..., x_1L, x_21, ..., x_KL)^T, that is, grouped into K consecutive segments of length L.
Using double indices, a nonzero vector element x_ij = 1 indicates that the ith node of the first node set is matched to the jth node of the second set, and x_ij = 0 otherwise. We illustrate such an indicator vector in Figure 1, where a bipartite matching between two small sets of nodes and the corresponding indicator vector are shown. Note that for small K and L the corresponding constraint matrices C_1 and C_2 can easily be written out explicitly.
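Such constraint matrices can be generated mechanically with Kronecker products. In the following sketch, the specific form C_1 = I_K ⊗ e_L^T and C_2 = e_K^T ⊗ I_L is our assumption about the intended layout, matching the row-major ordering of x described above:

```python
import numpy as np

K, L = 2, 3  # illustrative sizes

# C1 x = 1 forces exactly one partner per node of the smaller graph;
# C2 x <= 1 forces at most one partner per node of the larger graph.
C1 = np.kron(np.eye(K), np.ones((1, L)))
C2 = np.kron(np.ones((1, K)), np.eye(L))

# Indicator vector matching node 0 -> 1 and node 1 -> 2 (row-major x_ij order).
x = np.zeros(K * L)
x[0 * L + 1] = 1
x[1 * L + 2] = 1

print(C1 @ x)  # [1. 1.]
print(C2 @ x)  # [0. 1. 1.]
```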
The relational structure matrix Q, which contains the structural information of the graphs G_1 and G_2 and appears within the objective function of the optimization problem (5.1), can be written in a form that exploits the Kronecker product.
Definition 5.1 (Relational Structure Matrix). Q = A_1 ⊗ Ā_2 + Ā_1 ⊗ A_2. (5.4) Here A_1 and A_2 are the 0,1-adjacency matrices of the two graphs G_1 and G_2. The matrices Ā_1 and Ā_2 represent the complementary adjacency matrices, which are computed as follows.
Definition 5.2 (Complementary Adjacency Matrices). Ā_i = E − I − A_i, i = 1, 2.
Here E denotes a matrix with all elements equal to one and I denotes the unit matrix of appropriate size. The complementary adjacency matrix has elements equal to one exactly where two distinct nodes are not directly connected in the graph. To illustrate this, the adjacency matrix of a small graph along with its complementary adjacency matrix is shown in Figure 2. There, the matrices are represented as binary images (0 black, 1 white).
We show this because similar representations will be used to illustrate some particular matrices that appear later in this paper.
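Under the definitions above, the complementary adjacency matrices and the relational structure matrix can be generated in a few lines. This is a sketch; the Kronecker-product form follows our reading of Definition 5.1:

```python
import numpy as np

def complementary_adjacency(A):
    """A_bar = E - I - A: ones exactly where two distinct nodes are not adjacent."""
    n = A.shape[0]
    return np.ones((n, n)) - np.eye(n) - A

def relational_structure_matrix(A1, A2):
    """Q = A1 (x) A2_bar + A1_bar (x) A2, with (x) the Kronecker product."""
    return np.kron(A1, complementary_adjacency(A2)) + \
           np.kron(complementary_adjacency(A1), A2)

# Tiny example: a single edge (2 nodes) against a path of 3 nodes.
A1 = np.array([[0., 1.], [1., 0.]])
A2 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
Q = relational_structure_matrix(A1, A2)
print(Q.shape)  # (6, 6)
```

Note that Q is symmetric and binary: for each index pair, at most one of the two Kronecker terms can be nonzero, because A and Ā never have a one at the same position.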
5.1. Subgraph Isomorphism
In the following, we show that a 0,1-solution vector x of the optimization problem (5.1) that has an optimal objective value of zero represents a subgraph isomorphism. The matrix Q is defined by (5.4). We first show that zero is the smallest possible value of the integer optimization problem. Then we show that every deviation from a subgraph isomorphism results in an objective value larger than 0, so an objective value of zero indeed represents a subgraph isomorphism.
Proposition 5.3. The lowest possible value of the combinatorial optimization problem (5.1) is zero.
Proof. The elements of Q and x are all nonnegative; in fact, all of them are either zero or one. Therefore, the lowest possible value of the quadratic cost term x^T Q x, which can be rewritten as the sum x^T Q x = Σ_{ij} Σ_{kl} x_ij Q_{ij,kl} x_kl, (5.6) is zero.
Proposition 5.4. A solution of the quadratic optimization problem (5.1) with the minimal value of zero represents a subgraph isomorphism.
In order to prove this, we look closer at the term x_ij Q_{ij,kl} x_kl within the sum of (5.6) and show that it leads to a cost larger than 0 only if the considered matching violates the condition for a subgraph isomorphism.
Proof. Due to the constraints, x represents a bipartite matching, and only if the product x_ij x_kl is equal to one can the term within the sum (5.6) be different from zero; only then does the value Q_{ij,kl} have to be considered in detail. We also refer to Q_{ij,kl} as the structure comparison term. The following two cases lead to x_ij x_kl = 1 in the sum (5.6).
Case A. The indices (i, j) and (k, l) refer to the same node in G_1, meaning that i = k and, due to the matching constraints, j = l. As the diagonals of A_1 and Ā_1 are zero, one finds (A_1)_ii = 0 and (Ā_1)_ii = 0. Then the term Q_{ij,ij} is always equal to zero and does not contribute to the sum.
Case B. The indices i and k refer to different nodes in G_1 (i ≠ k). Due to the bipartite matching constraint, a value x_ij x_kl = 1 represents the situation x_ij = 1 and x_kl = 1, where the nodes i and k in G_1 are mapped to two different nodes, j and l, in the second graph, respectively. Then four possible cases for the structure comparison term Q_{ij,kl} exist, which result in a cost of either zero or one. These subcases are considered separately below.
The subcases we have to consider in Case B include all four possible structural configurations between the two pairs of nodes ((i, k) in G_1 and (j, l) in G_2) and are listed in Table 1. The two mappings which do not preserve the structure, and therefore violate the isomorphism conditions, result in a cost of one. These four cases (I–IV) are described in more detail below, and we will see that a cost is added for every mapping that results in a difference between the structure of graph G_1 and the structure of the considered subgraph of the second graph G_2.
(I) If the two nodes i and k in the first graph are neighbours, (A_1)_ik = 1, then no cost is added in (5.6) if the nodes j and l in the second graph are neighbours too: (A_2)_jl = 1. (II) Otherwise, if i and k are neighbours in G_1, (A_1)_ik = 1, but the corresponding nodes j and l are not direct neighbours in the second graph, (A_2)_jl = 0, then a cost of 1 is added.
The mappings which correspond to the configuration cases I and II are visualised in Figure 3.
(III) Analogously, the structure comparison term penalises assignments where a pair of nodes (i and k) that are not adjacent in the graph G_1 is mapped to neighbouring nodes (j and l) in the second graph. (IV) Finally, if i and k are not adjacent in the first graph and the nodes j and l in G_2 are also not adjacent, no cost is added.
Figure 4 illustrates the situations III and IV in detail.
The itemisation of these four possible cases shows that only mappings that lead to a change in the structure are penalised with a cost. Structure-preserving mappings, which are compatible with a subgraph isomorphism, cause no costs, and if all mappings are structure preserving, the matching represents a subgraph isomorphism.
Note that due to the symmetry of the adjacency matrices the quadratic cost term x^T Q x is symmetric, and every difference in the compared structures of the two graphs is counted twice, resulting in an actual cost of 2 for every difference in the structure.
Finally, the sum (5.6), and therefore the objective function, evaluates the full matching encoded in x. Only for matchings for which the structure of G_1 is mapped without difference onto the corresponding substructure of G_2 (and vice versa) are all the terms within the sum (5.6) zero. In this case, the bipartite matching represented by the vector x is a subgraph isomorphism.
(a) I: Good assignment (no costs)
(b) II: Bad assignment (costly)
(a) III: Bad assignment (costly)
(b) IV: Good assignment (no costs)
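The case analysis above is easy to verify numerically. In the following sketch (using the assumed Q construction from Definition 5.1), mapping the single edge of a 2-node graph onto an edge of a path costs nothing, while mapping it onto a non-edge costs 2, since the single structural difference is counted twice:

```python
import numpy as np

A1 = np.array([[0., 1.], [1., 0.]])                        # single edge
A2 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # path 0-1-2

A1b = np.ones((2, 2)) - np.eye(2) - A1
A2b = np.ones((3, 3)) - np.eye(3) - A2
Q = np.kron(A1, A2b) + np.kron(A1b, A2)

def objective(mapping, K=2, L=3):
    """Evaluate x^T Q x for the indicator vector of the given mapping."""
    x = np.zeros(K * L)
    for i, j in enumerate(mapping):
        x[i * L + j] = 1
    return x @ Q @ x

print(objective((0, 1)))  # adjacent nodes mapped to adjacent nodes -> 0.0
print(objective((0, 2)))  # edge mapped to a non-edge, counted twice -> 2.0
```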
We wish to emphasise that the minimisation of (5.1) represents the search for the bipartite matching with the smallest possible structural deviation between G_1 and the considered subgraph of G_2. The optimization problem (5.1) can therefore be seen as computing a graph edit distance with a cost of 2 for each addition or removal of an edge that is needed to turn the first graph G_1 into the considered subgraph of the second graph G_2.
6. Example Problem
In the next section we discuss in detail how we obtain the semidefinite relaxation of (5.1). In order to ease the reproduction of our approach, we illustrate several details using the particular subgraph isomorphism problem shown in Figure 5. For this problem instance we will also be able to conclude from a lower bound larger than 0 that no subgraph isomorphism can be found.
7. Convex Problem Relaxation
In the following we explain the convex relaxation of the combinatorial isomorphism approach (5.1). It is relaxed to a (convex) semidefinite program (SDP) of the following standard form: min tr(Q X) subject to tr(A_i X) = b_i, i = 1, ..., m, and X ⪰ 0. (7.1)
The constraint X ⪰ 0 means that the matrix X has to be positive semidefinite. This convex optimization problem can be solved with standard methods like interior point algorithms. Note that the solution of the convex relaxation (7.1) provides a lower bound to the combinatorial optimization problem (5.1). A lower bound with a value larger than zero shows that no subgraph isomorphism can be present, as the objective value of (5.1) for an instance with a subgraph isomorphism is zero. Below, we describe in detail how we derive such a semidefinite program from (5.1).
7.1. SDP Objective Function
In order to obtain an appropriate SDP relaxation for the combinatorial subgraph matching problem, we start with a reformulation of the objective function of (5.1): x^T Q x = tr(Q x x^T) = tr(Q X) with X = x x^T.
Here we exploited the invariance of the trace operator under cyclic exchange. Besides being symmetric, the matrix X = x x^T is positive semidefinite and has rank 1. The objective function is relaxed by dropping the rank-1 condition on X and leaving the constraint X ⪰ 0 untouched. This makes the set of feasible matrices convex and lifts the original problem (5.1), defined in a vector space of dimension KL, into the space of symmetric, positive semidefinite KL × KL matrices. We note that the diagonal elements of the relaxed X reflect the approximation of the vector x which we are searching for, as x_i² = x_i holds for 0,1-binary values. We further remark that the computed lower bound in (7.1) can be negative although X is constrained to be positive semidefinite.
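The lifting step can be illustrated directly: for any 0,1-vector x, the rank-1 matrix X = x x^T satisfies the trace identity, is positive semidefinite, and carries x on its diagonal. A sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
Q = rng.integers(0, 2, size=(n, n)).astype(float)
Q = (Q + Q.T) / 2                      # symmetric problem data
x = rng.integers(0, 2, size=n).astype(float)

X = np.outer(x, x)                     # lifted rank-1 matrix X = x x^T

# Trace invariance under cyclic exchange: x^T Q x = tr(Q x x^T) = tr(Q X).
print(np.isclose(x @ Q @ x, np.trace(Q @ X)))   # True

# X is positive semidefinite, and its diagonal reproduces x
# because x_i^2 = x_i for 0/1 values.
print(np.all(np.linalg.eigvalsh(X) >= -1e-9))   # True
print(np.allclose(np.diag(X), x))               # True
```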
The relational structure matrix Q is a binary matrix, and the particular matrix which represents the example subgraph isomorphism problem introduced in Section 6 is depicted in Figure 6. One easily detects the patterns resulting from the Kronecker product formulation in (5.4). On the fine scale, the pattern of the adjacency matrix of the graph G_2 can be seen. The adjacency structure of the graph G_1 is present on a larger scale (compare the adjacency matrices in Figure 5).
7.2. SDP Constraints
In the convex relaxation (7.1), constraints have to be incorporated in the form tr(A_i X) = b_i, where A_i is a constant matrix and b_i is a scalar value. The aim is to approximate the original constraints as well as possible in this way. Below we describe four different types (a–d) of constraint matrices, along with their corresponding values for b_i, that we used to tighten the relaxation and to approximate the bipartite matching constraints within the standard SDP formulation (7.1).
(a) In order to get a tight relaxation, we exploit the fact that x_i² = x_i holds true for 0,1-variables by introducing an additional first row and column in the problem matrix, resulting in matrices X̂ and Q̂ whose dimension is increased by one compared to X and Q. In Q̂ the additional elements are set to zero in order not to change the value of the objective function. The elements of the new row and the new column in X̂ are forced to be equal to the corresponding diagonal elements by using constraint matrices whose elements can be defined with the Kronecker delta; each such matrix has nonzero entries only at the two symmetric border positions of element i and at the diagonal position (i, i).
Here we make use of the Kronecker delta δ_ij, which is one if i = j and zero otherwise. Note that the Kronecker delta representation allows for an easy creation of the matrices in any computer programming language. Each of these constraint matrices has only 3 nonzero elements. The corresponding constraint values b_i are all zero.
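As an illustration of type (a), the following sketch builds a constraint matrix with exactly 3 nonzero elements. The specific placement (coupling the two symmetric border entries of index i with the diagonal entry (i, i) of the enlarged matrix, where row/column 0 is the added border) is our assumption about the intended construction:

```python
import numpy as np

def type_a_constraint(i, n):
    """tr(A @ X_hat) = 0 then enforces X_hat[0, i] + X_hat[i, 0] = 2 * X_hat[i, i],
    i.e. (X_hat being symmetric) the border entry equals the diagonal entry.
    Exactly 3 nonzero elements."""
    A = np.zeros((n, n))
    A[0, i] = 1.0
    A[i, 0] = 1.0
    A[i, i] = -2.0
    return A

# For a lifted vector x_hat = (1, x) with 0/1 entries, the constraint holds:
x = np.array([1.0, 0.0, 1.0])
x_hat = np.concatenate(([1.0], x))
X_hat = np.outer(x_hat, x_hat)
A2c = type_a_constraint(2, 4)
print(np.trace(A2c @ X_hat))        # 0.0
print(int(np.count_nonzero(A2c)))   # 3
```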
(b) We restrict the first element of the matrix X̂ to X̂_11 = 1, using a single additional constraint matrix whose only nonzero element is its first element. The corresponding constraint value b is 1.
(c) The equality constraints Σ_j x_ij = 1, i = 1, ..., K, which are part of the bipartite matching constraints, represent the condition that each node of the smaller graph is mapped to exactly one node of the scene graph. To model these, we define K constraint matrices which ensure (taking the chosen order of the elements of x into account) that the sum of the appropriate partition of the diagonal elements of X̂ is 1. Note that we operate on the diagonal elements of X̂ as they reflect the vector x. The ith constraint matrix selects the ith consecutive block of L diagonal elements.
This defines K matrices in which the appropriate partition of L consecutive elements on the diagonal is set to one. All other elements are zero. For these constraints the corresponding values of b_i are all 1.
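Type (c) matrices can likewise be generated mechanically. In this sketch the index layout (an offset of one for the added border row/column, then K blocks of length L) follows the vector ordering assumed above:

```python
import numpy as np

def type_c_constraint(i, K, L):
    """Diagonal 0/1 matrix selecting the i-th consecutive block of L diagonal
    entries of the enlarged (1 + K*L)-dimensional matrix X_hat; tr(C @ X_hat) = 1
    then encodes sum_j x_ij = 1 for node i of the smaller graph."""
    n = 1 + K * L
    C = np.zeros((n, n))
    for j in range(L):
        idx = 1 + i * L + j        # skip the added first row/column
        C[idx, idx] = 1.0
    return C

K, L = 2, 3
# Valid matching: node 0 -> 1, node 1 -> 2.
x = np.zeros(K * L)
x[0 * L + 1] = 1.0
x[1 * L + 2] = 1.0
x_hat = np.concatenate(([1.0], x))
X_hat = np.outer(x_hat, x_hat)

for i in range(K):
    print(np.trace(type_c_constraint(i, K, L) @ X_hat))  # 1.0 for each i
```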
(d) All integer solutions X = x x^T, where x represents a bipartite matching, have zero values at those matrix elements where the matrix S = I_K ⊗ (E_L − I_L) + (E_K − I_K) ⊗ I_L has nonzero elements. Here the matrices E_K and E_L have all elements 1, and I_K and I_L represent the unit matrices. In order to approximate the bipartite matching constraints, we want to force the corresponding elements of the (enlarged) matrix X̂ to be zero. This is done with the help of constraint matrices that are determined by index triples and that have only two symmetric nonzero elements each.
The index triples attain all valid combinations that correspond to pairs of assignments excluded by the bipartite matching, namely two different partners for the same node of G_1 and the same partner for two different nodes of G_1.
Even though the number of index combinations is high, the structure of a single constraint matrix is fairly simple, as every matrix has only two nonzero elements. For all these constraints the corresponding constants b_i have to be zero. With this we get additional constraints that ensure zero values within the matrix X̂ wherever the corresponding elements of Ŝ (enlarged by one dimension compared to S) have nonzero values. Therefore tr(Ŝ X̂) = 0 is valid. Note that each constraint forces two elements in the solution matrix to be zero. The black pixels in the matrix shown in Figure 7 indicate all the elements that are forced to be zero for the example problem of Section 6.
Altogether this sums up to a total of m constraints of the form tr(A_i X̂) = b_i within the convex semidefinite problem formulation (7.1). By solving the convex relaxation (7.1) we obtain a lower bound to the combinatorial optimization problem (5.1).
7.3. Computational Effort
Most of the computational effort within the SDP approach is needed for the computation of the solution of the SDP relaxation (7.1). We used external SDP solvers for this task. An independent benchmarking of several SDP solvers is available. A comparison of three SDP solvers (DSDP, PENSDP, and CSDP) on the basis of our own data is shown in Table 2. There we compared the mean computation time needed to solve the SDP relaxations for random problems of two different sizes. The comparison was made on a 2.8 GHz PC with 2 GB of memory. We found that the CSDP solver best fits our needs.
The CSDP solver is a variant of the predictor-corrector algorithm suggested by Helmberg et al. Below we sketch the storage requirements and the computational time for this solver.
7.4. Storage Requirement
The author of the CSDP solver points out that it requires an amount of storage that grows with the square of the number of constraints m and with the squares of the sizes of the block diagonal matrices used to describe the problem and solution matrices of the semidefinite program. In our case the problem consists of a single block of size KL + 1, and the number of constraints m can be expressed in terms of the graph sizes K and L.
Using this, we computed that a subgraph matching problem instance of the smaller size considered in our experiments needs about 10 MB, while a larger problem instance already needs 17 MB of storage.
According to the CSDP documentation, the most expensive part of an iteration of the CSDP solver is the computation of the Cholesky factorisation of an m × m matrix, which requires time proportional to m³. This result is based on the assumption, fulfilled here, that the individual constraint matrices have only a constant number of nonzero elements and that m is somewhat larger than the matrix dimension.
8. Relation to the Maximum Clique Formulation
In this section we discuss the connections that can be drawn between our subgraph isomorphism approach and the maximum clique formulation for finding a maximum common subgraph isomorphism. Details about the maximum clique search in arbitrary graphs can be found, for example, in [14, 15, 26] and the references therein. For our discussion we introduce the following abbreviations. We denote by N the adjacency matrix of the association graph, which is introduced to transform the maximum common subgraph problem into a maximum clique problem. Two nodes (i, j) and (k, l) of the association graph, each representing a candidate assignment, are adjacent if and only if i ≠ k, j ≠ l, and the pairs have the same adjacency relation in their respective graphs, (A_1)_ik = (A_2)_jl. This elementwise definition can be related to the Kronecker product representation by exploiting that ab + (1 − a)(1 − b) = 1 − (a(1 − b) + (1 − a)b) holds for 0,1-binary elements a and b. We use Q for the structure matrix and S for the indicator matrix whose nonzero elements mark the positions where an integer solution satisfying the bipartite matching constraints must have zero values (compare also Figure 7). Furthermore, the unit matrix is denoted by I, and E is the matrix with all elements one. Note that all the involved matrices have the dimension KL × KL. Using these abbreviations, one finds the following connection between the matrices involved in the maximum clique formulation and our SDP approach: N = E − I − S − Q. (8.2)
Equation (8.2) shows explicitly the complementary nature of the matrices in the SDP approach and the maximum clique formulation. We can rewrite the combinatorial minimisation problem (5.1) as the minimisation of x^T (E − I − S − N) x, where x^T S x = 0 is guaranteed by the bipartite matching constraint. Recall that the 0,1-indicator vector x of length KL has exactly K elements set to one, a single one within each consecutive segment of length L. Exploiting this, we obtain x^T E x = K² and x^T I x = K. Then, using (8.2), we find x^T Q x = K² − K − x^T N x, (8.4) and due to the fixed value K² − K on the left-hand side of (8.4), minimising x^T Q x is equivalent to maximising x^T N x with the same constraints for x. Therefore, maximising x^T N x under the bipartite matching constraint is the search for a set of K nodes in the association graph that deviates the least from a clique. In the maximum clique problem that arises from the largest common subgraph problem, one searches for a 0,1-vector that indicates the nodes belonging to the maximum clique, without making any further assumptions about constraints. However, from the construction of the association graph for the largest common subgraph problem, one finds that the largest clique that can be found in the association graph (in the presence of a full subgraph isomorphism) has K members, and that the obtained indicator vector will satisfy the bipartite matching constraint. This becomes clear as two nodes in the association graph that violate the bipartite matching constraints are not connected (the adjacency matrix N has zero elements wherever S has elements one) and therefore cannot be part of the same clique. This also means that the reformulation of the largest common subgraph problem into a maximum clique problem for graphs G_1 and G_2 with K ≤ L has a trivial upper bound of K for the largest clique size.
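The complementary relation (8.2) can be checked numerically on a small example. In the following sketch, the elementwise association-graph construction and the Kronecker-product forms of Q and S reflect our reading of the definitions above:

```python
import numpy as np

def association_adjacency(A1, A2):
    """Association graph: candidate assignments (i, j) and (k, l) are adjacent
    iff i != k, j != l, and the pairs agree structurally: A1[i,k] == A2[j,l]."""
    K, L = A1.shape[0], A2.shape[0]
    N = np.zeros((K * L, K * L))
    for i in range(K):
        for j in range(L):
            for k in range(K):
                for l in range(L):
                    if i != k and j != l and A1[i, k] == A2[j, l]:
                        N[i * L + j, k * L + l] = 1.0
    return N

A1 = np.array([[0., 1.], [1., 0.]])
A2 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
K, L = 2, 3
n = K * L

A1b = np.ones((K, K)) - np.eye(K) - A1
A2b = np.ones((L, L)) - np.eye(L) - A2
Q = np.kron(A1, A2b) + np.kron(A1b, A2)
S = np.kron(np.eye(K), np.ones((L, L)) - np.eye(L)) + \
    np.kron(np.ones((K, K)) - np.eye(K), np.eye(L))
N = association_adjacency(A1, A2)

# Complementary relation N = E - I - S - Q of (8.2):
print(np.allclose(N, np.ones((n, n)) - np.eye(n) - S - Q))  # True
```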
Using the SDP relaxation (7.1) we obtain a lower bound for x^T Q x, which we can use in (8.4) to obtain an upper bound for x^T N x. Knowing that x^T N x = K² − K holds exactly if the K assignments indicated by the binary vector x form a clique in the association graph (cf. the Motzkin–Straus theorem), we see that a positive lower bound shows that a clique with K nodes in the association graph cannot be found (meaning also that no full subgraph isomorphism exists in the original problem). Computing the upper bounds for the clique size of the (unconstrained) maximum clique search that were used in earlier work, one finds that these lie far above the bound K which is inherent in the reformulation of the largest common subgraph problem into a maximum clique problem. In the experiment section we will see that these bounds for the clique size are not even close to the trivial bound K arising from the largest common subgraph problem reformulation. Of course, they are still useful for the general maximum clique problem on arbitrary graphs. We use the same notation as the original authors: the bounds involve the spectral radius of the adjacency matrix of the association graph, the number of nodes and the number of edges in the association graph, the density of ones in N, the number of eigenvalues of N that are smaller than or equal to minus one (−1), and the inverse/complementary adjacency matrix. Computing these bounds for our example subgraph matching problem (see Section 6), one obtains upper bounds for the clique size in the range from 35 to 61 (see also the bounds listed in Table 3), which is far above the trivial bound of 7. Note that the computed upper bounds approximately agree with published results for this density. However, the lower bound proves that the maximum clique size is lower than K = 7, which amounts to a tight upper bound for the clique size.
As the clique-based upper bounds lie far above the trivial bound K, this suggests that the additional knowledge in the clique formulation (namely, the bipartite matching constraints) can and should be exploited for this particular problem formulation to improve the upper bounds. We summarise our results and experiments in the next section.
9. Results for the Non-Isomorphism Bound
For the illustrative example (see Figure 5) we computed a lower bound using the SDP relaxation (7.1) which proves that a subgraph isomorphism does not exist in this problem instance. The bound and the optimal value, along with the distribution of the objective values for this problem, are shown in Figure 8. Note that the attainable objective values of (5.1) are restricted to discrete values, as the quadratic term x^T Q x can only take values which are multiples of 2. Here the optimal objective value is 2.0. Note that we did not apply any preprocessing, like the elimination of mappings that cannot lead to a subgraph isomorphism, for the illustrative example.
For a further investigation of the bound (7.1), we created 1000 (connected) random subgraph matching problem instances with fixed sizes K and L for the two graphs G_1 and G_2. The edge probability of the graph G_1 was set to 0.5, and the probability for an edge in the second graph was set to 0.2. The ground truth for these experiments was computed using the VFlib graph matching library.
The experiments reveal that for various problem instances the relaxation is tight enough to conclude that no subgraph isomorphism can exist. We obtained 123 problem instances with a lower bound larger than zero, which proves that no subgraph isomorphism can occur in these instances. The other 877 problem instances have a lower bound of zero or below. From the ground truth data we know that for 538 of these problem instances the combinatorial optimum is larger than 0, indicating that for these cases the relaxation is not tight enough to detect that a subgraph isomorphism cannot be found. These results are summarised in Table 4 in the row corresponding to no preprocessing. The upper bounds for the equivalent maximum clique formulation of the same 1000 problems all lie far above the trivial bound of K that is inherent in the problem formulation (see Table 5), and in no case does this allow one to prove that no subgraph isomorphism can occur.
In the next section we investigate the improvements to the bound computation when mappings are removed from the problem formulation that cannot lead to a subgraph isomorphism.
9.1. Towards Larger Problem Instances
We implemented a simple pruning technique to reduce the dimension of the SDP problem. The basic idea is to eliminate all mappings for which the degree (the number of incident edges) of a node i in the first graph is larger than the degree of its candidate image j in the second graph, as such a mapping cannot be part of a subgraph isomorphism. The computational advantage is that every removed mapping reduces the dimension of the problem matrix by one and also allows the removal of the corresponding constraints from the semidefinite problem. A feasibility test then checks whether the remaining mapping opportunities can still lead to a bipartite matching such that all nodes of the smaller graph can be mapped to different nodes in the larger graph. Note that an infeasible situation might also result in an infeasible semidefinite problem formulation. For this pruning approach one has to keep in mind that the new bound of the reduced problem does not necessarily represent a lower bound of the original problem. This is because, in the case of a non-isomorphism, a removed mapping could be part of the matching which belongs to the global optimum of the original problem. In fact, the combinatorial optimum of the reduced problem can only increase (or stay the same), and the computed bound is a lower bound to the reduced problem only. However, for problem instances with an isomorphism the optimum does not change (it is still zero), and therefore a positive bound for the reduced problem still proves that an isomorphism cannot exist.
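The degree-based pruning can be sketched in a few lines (the feasibility test mentioned above would follow, e.g., via a bipartite matching on the surviving candidate pairs):

```python
import numpy as np

def prune_mappings(A1, A2):
    """Keep only candidate assignments i -> j with deg(i) <= deg(j):
    a node of the smaller graph can never be mapped to a node of lower
    degree in a subgraph isomorphism."""
    d1 = A1.sum(axis=1)
    d2 = A2.sum(axis=1)
    K, L = A1.shape[0], A2.shape[0]
    return [(i, j) for i in range(K) for j in range(L) if d1[i] <= d2[j]]

# Star with 3 leaves (centre has degree 3) against a path of 4 nodes (max degree 2):
A1 = np.array([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]], dtype=float)
A2 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)

kept = prune_mappings(A1, A2)
# The star centre (node 0) has no admissible image, so no pair with i == 0
# survives and a subgraph isomorphism is already ruled out.
print(any(i == 0 for i, _ in kept))  # False
```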
Applying the above-described technique to the problem instances of the previous section, the size of the SDP problem matrices is reduced considerably on average. The number of cases where the relaxation is not tight enough improved from 538 to 334 out of the 661 cases with a combinatorial optimum larger than 0 (see also the second row in Table 4).
We expect that the bound could be slightly tightened by also including in the SDP relaxation the inequality constraints that are currently not considered. However, first findings indicate that their influence might be negligible, possibly because they are already largely captured by the incorporated constraints. Another computational improvement within the SDP algorithms could result from an exploitation of the highly regular structure of the SDP matrices.
10. Conclusion
In this paper we proposed a convex relaxation bound for the subgraph isomorphism problem and showed that the bound is not only of theoretical interest but also applies to several instances of subgraph matching problems. It would be interesting to investigate which criteria a subgraph matching problem has to fulfill to result in a tight relaxation. Such insights could be useful in the process of creating or obtaining object graphs from images for object recognition tasks. At the current stage, reasonably tight bounds result from semidefinite problems with problem matrices of moderate size. An improvement of the proposed method can be expected when the inequality constraints are also included in the SDP relaxation. However, for larger problem instances the relaxation will become less tight, and a lower bound not larger than zero becomes more likely. Note, though, that even less tight relaxations in practice still lead to good integer solutions.
This research was partly supported by Marie Curie Intra-European Fellowships within the 6th European Community Framework Programme and an Alain Bensoussan Fellowship from the European Research Consortium for Informatics and Mathematics (ERCIM).
H. Wolkowicz, R. Saigal, and L. Vandenberghe, Eds., Handbook of Semidefinite Programming, Kluwer Academic Publishers, Boston, Mass, USA, 2000.
C. Schellewald and C. Schnörr, “Probabilistic subgraph matching based on convex relaxation,” in Proceedings of the 5th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR '05), vol. 3757 of Lecture Notes in Computer Science, pp. 171–186, 2005.
H. Yu and E. R. Hancock, “Graph seriation using semi-definite programming,” in Proceedings of the 5th IAPR International Workshop on Graph-Based Representations in Pattern Recognition (GbRPR '05), vol. 3434 of Lecture Notes in Computer Science, pp. 63–71, 2005.
M. Agrawal and L. S. Davis, “Camera calibration using spheres: a semi-definite programming approach,” in Proceedings of the 9th IEEE International Conference on Computer Vision (ICCV '03), vol. 2, pp. 782–789, 2003.
I. M. Bomze, M. Budinich, P. M. Pardalos, and M. Pelillo, “The maximum clique problem,” in Handbook of Combinatorial Optimization, D.-Z. Du and P. M. Pardalos, Eds., pp. 1–74, Kluwer Academic, Boston, Mass, USA, 1999.
M. R. Garey and D. S. Johnson, Computers and Intractability, A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, San Francisco, Calif, USA, 1991.
S. J. Benson and Y. Ye, “DSDP3: dual scaling algorithm for general positive semidefinite programming,” Tech. Rep. ANL/MCS-P851-1000, Argonne National Labs, 2001.
A. Graham, Kronecker Products and Matrix Calculus with Applications, Ellis Horwood and John Wiley & Sons, 1981.
Y. Ye, Interior Point Algorithms: Theory and Analysis, John Wiley & Sons Inc., New York, NY, USA, 1997.
B. Borchers, CSDP 4.8 User's Guide, 2004.
L. P. Cordella, P. Foggia, C. Sansone, and M. Vento, “Performance evaluation of the vf graph matching algorithm,” in Proceedings of the 10th International Conference on Image Analysis and Processing (ICIAP '99), p. 1172, IEEE Computer Society, Washington, DC, USA, 1999.