Abstract

The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), let A ∘ B = A^{1/2}BA^{1/2} be the sequential product and let (AB + BA)/2 be the Jordan product of A and B. The main purpose of this note is to study some of the algebraic properties of effects. Many of our results show that algebraic conditions on these products imply that A and B have diagonal operator matrix forms in which an orthogonal projection onto a closed subspace is the common part of A and B. Moreover, some generalizations of results known in the literature and a number of new results for bounded operators are derived.

1. Introduction

Let H, B(H), and P(H) be a complex Hilbert space, the set of all bounded linear operators on H, and the set of all orthogonal projections on H, respectively. For A ∈ B(H), we will denote by N(A) and R(A) the null space and the range of A, respectively. An operator A is said to be injective if N(A) = {0}; cl R(A) denotes the closure of R(A). A ∈ B(H) is said to be positive if ⟨Ax, x⟩ ≥ 0 for all x ∈ H, and A is said to be a contraction if ||A|| ≤ 1. P_M denotes the orthogonal projection onto a closed subspace M of H.

The elements of E(H) = {A ∈ B(H) : 0 ≤ A ≤ I} are called quantum effects. The elements of P(H) are the projections corresponding to quantum events and are called sharp effects. For A, B ∈ E(H), the sequential product of A and B is A ∘ B = A^{1/2}BA^{1/2}. We interpret A ∘ B as the effect that occurs when A occurs first and B occurs second [19]. Let (AB + BA)/2 be the Jordan product of A and B. If AB = BA, we say that A and B are compatible. We define the negation of A ∈ E(H) by A' = I − A.
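For concreteness, the sequential product is easy to experiment with numerically. The sketch below is a minimal illustration, assuming the standard definition A ∘ B = A^{1/2}BA^{1/2}; the two projections A and B are chosen for illustration only. It confirms that the sequential product of two effects is again an effect but is noncommutative in general.

```python
import numpy as np

def op_sqrt(A):
    # Positive square root of a positive semidefinite Hermitian matrix,
    # computed from its spectral decomposition.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def seq_prod(A, B):
    # Sequential product A o B = A^{1/2} B A^{1/2}.
    s = op_sqrt(A)
    return s @ B @ s

def is_effect(A, tol=1e-10):
    # An effect is a positive operator bounded above by the identity,
    # i.e. all eigenvalues lie in [0, 1].
    w = np.linalg.eigvalsh(A)
    return bool(np.all(w >= -tol) and np.all(w <= 1.0 + tol))

A = np.array([[1.0, 0.0], [0.0, 0.0]])        # projection onto span{e1}
B = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # projection onto span{e1 + e2}

AB = seq_prod(A, B)   # equals [[1/2, 0], [0, 0]]
BA = seq_prod(B, A)   # equals (1/4) * [[1, 1], [1, 1]]

print(is_effect(AB), is_effect(BA))  # True True: E(H) is closed under the product
print(np.allclose(AB, BA))           # False: the product is noncommutative
```

Note that A ∘ B ≠ B ∘ A here even though both A and B are sharp, which is why the commutativity conditions studied below are genuinely restrictive.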

In this note, we will study some properties of the sequential product and the Jordan product. Our results show that imposing classical conditions on A and B forces close relations between A and B expressed through range relations. For example, let  for some . Then,  (or ) if and only if A and B have diagonal operator matrix forms as follows: where an orthogonal projection onto a closed subspace is the common part of A and B. These results give us detailed information on the matrix structures of the two operators A and B. It is well known that if  or , then  if and only if  (see [2, Theorem 2.6(a)] and [10, Theorem 2.3]). We generalize this result and show that, under some conditions,  if and only if A and B have the operator matrix forms:

In [11, Lemma 3.4], the authors proved that if  and dim , then  if and only if . The authors said that they did not know whether the condition on dim  can be relaxed. By some algebraic and spectral techniques, we extend some results in [11]. Some generalizations of results known in the literature and a number of new results for bounded operators are derived.

2. Main Results

Our main interest is in sequential products of quantum effects. The next result gives some of the important properties of the sequential product.

Lemma 1 (see [2]). Let  and .
(i)  if and only if .
(ii) If , then .
(iii)  if and only if .

Lemma 2 (see [12]). Let  be a positive operator. If  has the operator matrix representation  with respect to the space decomposition , then the following statements hold.
(i)  as an operator on  is positive, .
(ii)  for some contraction , , .
(iii) If  for some , then  and , , .

Lemma 3 (see [13, Lemma 2.2]). Let  be a contraction and let , as an operator from  into , have the operator matrix . If  is unitary from  onto , then  and .

In [11], Gudder proved that if  and , then  and  are compatible. Based on this result, we obtain the following results.

Theorem 4. Let  and .
(i)  if and only if ;  if and only if .
(ii) There exist  such that  if and only if  is a projection.
(iii) If there exist  such that , then . In addition, if , then  if and only if .

Proof. Note that and , as operators on , have the operator matrices respectively, where , , and .
(i) By (4), it is clear that  if . On the other hand, if , then  since . From  we get ; that is,  and . If , then  and  in (4). We get that . On the other hand,  since  and  by Lemma 2. Hence, .
(ii) If  is a projection, write  and ; then . Conversely, suppose that there exist two projections  and  such that . If  is a unit vector, then , so . That is,  since  is a positive operator. This shows that . Similarly, . Hence, . The two projections  and  commute; therefore,  is a projection.
(iii) Since , by item (i). So, ; that is, . Conversely, let . Then there exists such that . Since , and can be written as operator matrices , with respect to the space decomposition , respectively, where is an injective positive operator. If , then . It follows that and .

Let  denote the self-adjoint projection onto the closure of . In general, the fact that  is a projection does not imply . For example, if , then  and  are projections and . But we have the following result.

Theorem 5. Let  and  for some . Then,  if and only if , if and only if  and  have operator matrix forms as  with respect to the space decomposition ; that is,  is a range projection on .

Proof. As we know, (see [14, Section 1.2.1]). So, for arbitrary , is a projection if and only if is a projection. If and have the forms (7), then and .
Necessity. Let . Then, , and hence . It follows that . If we consider  as a matrix form with respect to the space decomposition , then  has the corresponding matrix form . By Lemma 3, we get that . Hence,  and . From  we get . A similar argument shows that  implies that  and . Now, from  we derive that ; that is, . We get that . Hence, . If we denote , then  and  can be rewritten as  and , where , and  are injective, densely defined operators and . Since  is a projection, this implies that . So, . Hence, ;  and  have the matrix forms as in (7).

In Theorem 5, .

Theorem 6. Let . Then, if and only if and have operator matrix forms as with respect to the space decomposition . In particular, if and only if .

Proof. By (10), if , then clearly .
Necessity. Observe that  and , as operators on , have the forms  and , where  is injective and densely defined. Then the fact that  is a projection implies that  by Lemma 2. So,  because  is injective and densely defined.  can be further written as  with respect to the space decomposition , where  is injective and densely defined. Similarly,  has the corresponding form  with  and  being injective and densely defined. So
We claim that  is injective. In fact, if , then  on  and hence  on . Therefore,  on . Hence, for every ,
Since  and  are injective, we get , which contradicts the assumption. Now,  implies that . For every unit vector , . Since  is a contraction, we derive that  and  for every unit vector . It follows that . So, . Hence, ,  and  have the matrix forms as in (7).
In particular, if , then and . On the other hand, if , then and in (11). We have . Therefore, ; that is, .

Next, we are interested in the question of when  or . In Theorem 2.6 of [2] it is proved that, if  is finite dimensional and , then , and it is asked whether this holds for infinite-dimensional spaces . In [5, Theorem 2.6], the authors answer this question positively. Here, we include a different proof because it is very short.

Theorem 7. Let . Then,  if and only if

Proof. If , then clearly . On the other hand, for arbitrary , let  and . Let  be the spectral representation of . Thus,  has the operator matrix form  with respect to the space decomposition , where  and . It is clear that . Let  have the corresponding matrix form. Since , . Hence, it follows that  for all . Since  converges to zero in the strong operator topology, we get that . By Lemma 2, we know that . Hence,  for arbitrary . Note that . Hence,  and  have the form (15).

Note that if , then (i) ; (ii) ; (iii) . By Theorem 7, it is easy to get the following results.
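These identities can be sanity-checked numerically in the commuting case. The sketch below again assumes the standard product A ∘ B = A^{1/2}BA^{1/2}; the matrices are made up for illustration (B is a polynomial in A, so AB = BA automatically):

```python
import numpy as np

def op_sqrt(A):
    # Positive square root via the spectral decomposition of a Hermitian matrix.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def seq_prod(A, B):
    # Sequential product A o B = A^{1/2} B A^{1/2}.
    s = op_sqrt(A)
    return s @ B @ s

A = np.array([[0.6, 0.2], [0.2, 0.3]])  # eigenvalues 0.7 and 0.2, so A is an effect
B = A @ A / 2.0                          # an effect that commutes with A

assert np.allclose(A @ B, B @ A)
# For a commuting pair the sequential product reduces to the ordinary
# operator product and is symmetric in its two arguments:
print(np.allclose(seq_prod(A, B), A @ B))           # True
print(np.allclose(seq_prod(A, B), seq_prod(B, A)))  # True
```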

Corollary 8. Consider .

From Corollary 10, we know that . However,  does not imply . One can check this fact by choosing  and  in  (see [2]). Nevertheless, we obtain the following result.

Theorem 9. Let  and  such that .
(i) If , then  if and only if  and  have operator matrix forms  with respect to the space decomposition .
(ii) If , then  if and only if  and  have operator matrix forms  with respect to the space decomposition .

Proof. By (18) and (19), it is clear that and .
Necessity. (i) If , by Lemma 3, and as operators on have the operator matrix forms
If , then . So
By Lemma 2, we have . So, (18) holds.
(ii) If , then and as operators on can be denoted as
We have
By Lemma 2, we have ; that is,  and (19) holds.

Let and . Theorem 9 implies that if or , then . In particular, if or , then or hold automatically. We get the following corollary.

Corollary 10 (see [2, Theorem 2.6(a)] and [10, Theorem 2.3]). Let . If or , then if and only if .

In [11, Lemma 3.4], the authors proved that if  and dim , then  if and only if . The authors said they did not know whether the condition on dim  can be relaxed. In the following, we show that the condition dim  in [11, Lemma 3.4] can be relaxed.

Theorem 11. Consider . Then,  if and only if .

Proof. If , then So
We get , which is equal to . Put . Then,  and . Multiplying from the right, we get
Since  and , we derive that  is positive, and hence . Note that . We get that ; that is, . Therefore, . Since  and , we obtain that . In this case, , as operators on , have the operator matrix form  and  is injective and densely defined. By (26), we get that . By (28), we get . By (24), we get that . Hence, . Conversely, by (28), it is clear that  implies .

For with and , the sequential product of and is defined by . We interpret to be the measurement obtained when is performed first and is performed second. The sequential product is noncommutative and nonassociative in general. We write if the nonzero elements of are a permutation of the nonzero elements of . “” is an equivalence relation, and when we say that and are equivalent. In this case, the two submeasurements are identical up to an ordering of their outcomes [11].
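Reading the (elided) definition as in [11] — a measurement is a finite sequence of effects whose sum is the identity, and the sequential product of two measurements has components A_i ∘ B_j — one can check numerically that the sequential product of two measurements is again a measurement. The matrices below are illustrative only:

```python
import numpy as np

def op_sqrt(A):
    # Positive square root of a positive semidefinite Hermitian matrix.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def seq_prod(A, B):
    # Sequential product A o B = A^{1/2} B A^{1/2}.
    s = op_sqrt(A)
    return s @ B @ s

I2 = np.eye(2)

# Two toy two-outcome measurements: each is a list of effects summing to I.
A = [np.diag([0.7, 0.2])]
A.append(I2 - A[0])
B = [0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])]
B.append(I2 - B[0])

# Components A_i o B_j of the sequential product measurement.
AoB = [seq_prod(Ai, Bj) for Ai in A for Bj in B]

# sum_{i,j} A_i^{1/2} B_j A_i^{1/2} = sum_i A_i^{1/2} I A_i^{1/2} = sum_i A_i = I,
# so the components again form a measurement.
print(np.allclose(sum(AoB), I2))  # True
```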

The results in [11, Theorem 3.1] can be modified as follows. Note that in [2, Theorem 4.4] it is proved that  if and only if .

Theorem 12. Suppose that , , and . If , then .

Proof. Denote respectively. If there exists one corresponding term , , then by Lemma 1. Next, we consider equality for noncorresponding terms.
Case I. If , then by comparing the third and the fourth components on both sides, we get that ; that is, . So, .
Case II. If  or , then by comparing the first and the third components on both sides, we get that ; that is, . By Theorem 11, we get .
Case III. If , then by comparing the first and the second components on both sides, we get that ; that is, , and hence .
Case IV. If , then by comparing the first and the second components on both sides, we get that ; that is, . So, .
Case V. If , then by comparing the first and the third components on both sides, we get that ; that is, . So, .
Case VI. If , then by comparing the third and the fourth components on both sides, we get ; that is, . By Theorem 11, we get that  and .
Case VII. If  or , then by comparing the first and the second components on both sides, we get ; that is, . By Theorem 11, we get that  and .

The converse does not hold. Indeed, and yet the elements in need not be commutative. In the following, we give a characterization of the two submeasurements that are identical up to an arbitrary ordering of their outcomes.

Corollary 13. Suppose that , , and . An arbitrary permutation of the elements in is equivalent to if and only if .

Proof. If , then , and clearly an arbitrary permutation of the elements in is equivalent to .
Conversely, by Cases IV and VII in the proof of Theorem 12, we have .

Acknowledgments

The author would like to thank Professor Ziemowit Popowicz and the anonymous referees for their careful reading, very detailed comments, and many constructive suggestions, which greatly improved the presentation. A part of this paper was written while the author was visiting the Department of Mathematics, The College of William & Mary. He would like to thank Professors Chi-Kwong Li, Junping Shi, and Gexin Yu for their useful suggestions and comments. Support from the National Natural Science Foundation of China under Grant no. 11171222 and the Doctoral Program of the Ministry of Education under Grant no. 20094407120001 is also acknowledged.