#### Abstract

We present a new proof of the Pythagorean theorem which suggests a particular decomposition of the elements of a topological algebra in terms of an “inverse norm” (addressing unital algebraic structure rather than simply vector space structure). One consequence is the unification of Euclidean norm, Minkowski norm, geometric mean, and determinant, as expressions of this entity in the context of different algebras.

#### 1. Introduction

Apart from being unital topological *-algebras, matrix algebras, special Jordan algebras, and Cayley-Dickson algebras would seem to have little else in common. For example, the matrix algebra is associative but noncommutative, the special Jordan algebra derived from it is nonassociative but commutative (as are the spin factor Jordan algebras), and Cayley-Dickson algebras are both nonassociative and noncommutative (apart from the three lowest-dimensional instances). However, these latter sets of algebras share an interesting feature. Each is associated with a function $\lambda$ that vanishes on the nonunits and provides a decomposition of every unit $u$ as
$$u = \lambda(u)\,[\nabla\lambda]^{*}(u^{-1}), \tag{1.1}$$
with
$$\lambda\bigl([\nabla\lambda]^{*}(u^{-1})\bigr) = 1, \tag{1.2}$$
where $[\nabla\lambda]^{*}(u^{-1})$ indicates that the gradient is evaluated at $u^{-1}$, following which the involution $*$ is applied. For example, (i) on an $n$-dimensional Cayley-Dickson algebra, $\lambda$ is the quadratic mean of the components multiplied by $\sqrt{n}$, that is, $\lambda$ is the Euclidean norm; (ii) on the algebra $\mathbb{R}^n$ with component-wise addition and multiplication, $\lambda$ is the geometric mean of the absolute values of the components multiplied by $\sqrt{n}$; (iii) on the matrix algebra $M_n(\mathbb{R})$, $\lambda$ is the (principal) $n$th root of the determinant multiplied by $\sqrt{n}$; (iv) on the spin factor Jordan algebras, $\lambda$ is the Minkowski norm. Looked at another way, the Euclidean norm, the geometric mean, the $n$th root of the determinant, and the Minkowski norm are all expressions of the same thing in the context of different algebras. With respect to topological *-algebras, this “thing” supersedes the Euclidean norm and determinant, since neither is meaningful in all settings for which the solution to (1.1) and (1.2) is meaningful.
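Although the formal development begins in Section 2, the claimed decomposition can be illustrated numerically at this point. The following Python sketch is entirely our own illustration: `num_grad` is a hypothetical central-difference helper, and `lam_cd`, `lam_gm` name the functions of examples (i) and (ii). It checks the decomposition on the two-dimensional Cayley-Dickson algebra $\mathbb{C}$ and on $\mathbb{R}^2$ with component-wise multiplication.

```python
import numpy as np

def num_grad(f, x, h=1e-6):
    """Central-difference approximation to the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# (i) C as the 2-dimensional Cayley-Dickson algebra: lambda is the Euclidean
#     norm, the involution * is conjugation, and x^{-1} = x*/|x|^2.
lam_cd = lambda x: np.hypot(x[0], x[1])
x = np.array([3.0, 4.0])
x_inv = np.array([x[0], -x[1]]) / lam_cd(x)**2
g_star = num_grad(lam_cd, x_inv) * np.array([1.0, -1.0])  # gradient at x^{-1}, then *
assert np.allclose(lam_cd(x) * g_star, x, atol=1e-4)      # the decomposition (1.1)
assert abs(lam_cd(g_star) - 1.0) < 1e-4                   # unit scaling (1.2)

# (ii) R^2 with component-wise multiplication: lambda is sqrt(2) times the
#      geometric mean, the involution is the identity, x^{-1} = (1/x1, 1/x2).
lam_gm = lambda x: np.sqrt(2.0) * np.sqrt(abs(x[0] * x[1]))
y = np.array([2.0, 5.0])
g2 = num_grad(lam_gm, 1.0 / y)                            # gradient at y^{-1}
assert np.allclose(lam_gm(y) * g2, y, atol=1e-4)
assert abs(lam_gm(g2) - 1.0) < 1e-4
```

The same finite-difference check can be repeated in any of the algebras discussed below.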

There is another aspect relevant to the Cayley-Dickson algebras. In addition to (1.1) and (1.2), there is a function $\eta$ on the elements of the algebra such that for any unit $u$ in the algebra,
$$u = \eta(u)\,\nabla\eta(u), \tag{1.3}$$
with
$$\eta\bigl(\nabla\eta(u)\bigr) = 1. \tag{1.4}$$
This equation set makes no reference to multiplicative structure; that is, it is a general property of the underlying vector space. Indeed, $\eta$ is again the Euclidean norm. In fact, (1.3) and (1.4) can be derived from the (Hilbert formulation) axioms of plane Euclidean geometry without use of the Pythagorean theorem, and as such can be used as the centerpiece of a new proof of that theorem.

So, we first prove the Pythagorean theorem by deriving (1.3) and (1.4); we then use the latter equations to develop (1.1) and (1.2), following which we demonstrate the assertions of the first paragraph of this Introduction. Ultimately, existence of the decomposition (1.1), (1.2) is put forward as a kind of surrogate for the Pythagorean theorem in the context of topological algebras.

It will also be seen that there is a hierarchy related to the basic equations, evidenced by progressively more structure accompanying the solution function on particular algebras. The equations (1.1) and (1.2) have a clear analogy with the form of (1.3) and (1.4). Both cases present a decomposition of the units of an algebra as the product of a particular function's value at a point multiplied by a unity-scaled orientation point dependent on the function's gradient. The equations are nonlinear, in general. However, along with the prescription that $\lambda$ vanishes on the nonunits, the above particular matrix, Cayley-Dickson, and spin factor Jordan algebras also happen to satisfy the additional property that there is a real constant $\kappa$ such that for any unit $u$ in the algebra,
$$\lambda(u)\,\lambda(u^{-1}) = \kappa. \tag{1.5}$$
Replacing $u$ with $u^{-1}$ in (1.1), the above implies a decomposition of the inverse of a point as
$$u^{-1} = \frac{\kappa}{\lambda(u)}\,[\nabla\lambda]^{*}(u). \tag{1.6}$$
Furthermore, multiplying both sides of (1.6) by $\lambda(u)$ indicates that the above is a linear equation for $\lambda$. One can be even more restrictive and consider the set of algebras on which a function $\lambda$ exists satisfying all of the above, where in addition (1.5) is strengthened to
$$\lambda(u)\,\lambda(v) = \sqrt{\kappa}\;\lambda(uv) \tag{1.7}$$
for any two units $u, v$. In this case, the units form a group and $\lambda/\sqrt{\kappa}$ is a homomorphism on this group. The octonions occupy a special place as an algebra satisfying this prescription that is not a matrix subalgebra.
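For the matrix case, both the constancy property and its multiplicative strengthening can be checked numerically. The following Python sketch is our own illustration, with `lam` the determinant-based function of example (iii), restricted to matrices of positive determinant so the principal root is real.

```python
import numpy as np

n = 3
# lam restricted to the det > 0 branch so the principal n-th root is real
lam = lambda m: n**0.5 * np.linalg.det(m)**(1.0 / n)

rng = np.random.default_rng(0)
a = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant: a unit, det > 0
b = rng.random((n, n)) + n * np.eye(n)

# lam(u) * lam(u^{-1}) is the constant n on the units
assert abs(lam(a) * lam(np.linalg.inv(a)) - n) < 1e-9

# the strengthened property: lam / sqrt(n) is a homomorphism on the units
hom = lambda m: lam(m) / n**0.5
assert abs(hom(a @ b) - hom(a) * hom(b)) < 1e-9
```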

#### 2. Euclidean Decomposition

Theorem 2.1 (Pythagoras). *In a space satisfying the axioms of plane Euclidean geometry, the square of the hypotenuse of a right triangle is equal to the sum of the squares of its two other sides.*

The theorem hypothesis is assumed to indicate the Hilbert formulation of plane Euclidean geometry [1]. We will refer to the point set in question as the “Euclidean plane.” Points will be denoted by lower case Roman letters and real numbers by lower case Greek letters.

*Proof. *We begin by providing an outline of the proof.

A vector space structure is defined on the Euclidean plane after identifying one of the vertices of the hypotenuse of the given right triangle as an origin $o$. We define the Euclidean norm implicitly as the function $\eta$ giving the length of a line segment from the origin to any given point in the plane. Since the Hilbert formulation includes continuity axioms, we can employ the usual notions relating to limits and thereby define directional derivatives. The crux of the proof, Lemma 2.3, is the demonstration that the parallel axiom implies the existence and continuity of the directional derivatives at points other than the origin, and that the largest directional derivative at a point $p$ associated with a unit-length direction line segment has unit value and is such that its direction line segment is collinear with the line segment $op$. The novelty lies in the necessity that this be accomplished in the absence of an explicit formula for the Euclidean norm. A Cartesian axis system is now generated from the two sides of the given triangle forming the right angle, following which an isomorphism from the plane to the vector space $\mathbb{R}^2$ is easily demonstrated. Using this isomorphism, Lemma 2.3 is seen to imply the existence of the gradient $\nabla\eta(p)$ for $p$ not at the origin, with its characteristic property regarding generation of a directional derivative from a particular specified direction line segment, and such that the origin, $p$, and $\nabla\eta(p)$ are collinear. It is then a simple matter to show $\eta(\nabla\eta(p)) = 1$ and $p = \eta(p)\,\nabla\eta(p)$. For $p$ identified with the ordered pair $(\alpha, \beta)$, a solution to the latter partial differential equation is supplied by $\eta(p) = \sqrt{\alpha^2 + \beta^2}$. This solution is unique because $p = \eta(p)\,\nabla\eta(p)$ implies that $p$ and $\nabla\eta(p)$ are collinear with the origin, so that the equation can be written as an ordinary differential equation, which is easily shown to have a unique solution. This explicit representation of the Euclidean norm implies the Pythagorean theorem, thus concluding the proof.

We now fill in the details of the above argument.

Given the right triangle $oab$ with line segment $ob$ as the hypotenuse, we define a function $\eta$ that gives the length of the line segment $op$ for any point $p$ in the plane, with the convention that $\eta(o) = 0$. The continuity axioms that are part of the Hilbert formulation support the equivalent of the least upper bound axiom for the real number system, and we have the usual continuity properties related to $\mathbb{R}$, whose elements identify lengths of line segments in particular. Thus, the usual notion of limit can be defined and is assumed.

*Definition 2.2.* A *direction line segment* with respect to a particular point is any line segment one of whose endpoints is the given point. If the following limit exists, the *directional derivative* of $\eta$ at $p$ with respect to direction line segment $pq$ is
$$\lim_{\epsilon \to 0} \frac{\eta(p_{\epsilon}) - \eta(p)}{\epsilon}, \tag{2.1}$$
where $p_{\epsilon}$ formally denotes a point on the line containing $pq$ such that the line segment defined by this point and $p$ has length given by the product of $|\epsilon|$ and the length of $pq$, and $p$ lies between this point and $q$ if and only if $\epsilon < 0$.

Lemma 2.3. *For any point $p$ different from $o$, the directional derivative of $\eta$ at $p$ specified by a direction line segment $pq$ exists and is continuous at $p$. Furthermore, the largest directional derivative at $p$ associated with a direction line segment of a particular fixed length is such that $o, p, q$ are collinear with $p$ between $o$ and $q$, and if the direction line segment has unit length, then the largest directional derivative has unit value.*

*Proof of Lemma 2.3.* Let the length of $pq$ be $\ell$. If $q$ lies on the line containing $op$ with $p$ between $o$ and $q$, it is clear that the directional derivative exists and has a value given by $\ell$, since the expression inside the limit in (2.1) has this value for all $\epsilon$. In the same way, we can demonstrate that the directional derivative exists when $q$ is on the line containing $op$ but $p$ is not between $o$ and $q$, in which case the directional derivative has value $-\ell$.

Thus, suppose is not on the line containing . Figure 1 can be used to keep the following constructions in context. Consider the orthogonal projection of the point to the line containing , that is, the point on the line containing such that is a right angle (the expression denotes the angle within the triangle that is formed by the two line segments and ). Consider the circle centered at of radius . We claim that is in the region enclosed by the circle, that is, . Indeed, suppose this is false. Consider the circle with center at of radius , and let the tangent line to this circle at the point be denoted . Being a tangent, all of the points of other than will be such that
Note that intersects the line containing at a right angle (a tangent to a circle at a particular point is perpendicular to the circle radius at that point). But there is only one line through that meets the line containing at a right angle, and that is the line containing , as previously defined. So must contain , that is, is on but is not the point . The first inequality of (2.2) would then imply that *ϵ* is such that . This contradicts the second inequality of (2.2). Hence, we have established our claim that .

Let be the line perpendicular to at . Since is not the point , the line containing intersects in at most one point. In what follows, we assume is small enough so that no point of is on . Again consider the circle centered at with diameter . Its tangent at must intersect the line containing at the point (since is not on , the tangent cannot be parallel to ). Being on a tangent but not the point of tangency itself, is necessarily external to the region enclosed by the circle ( is greater than the circle radius). This circle intersects the line containing at two points defining a diameter of the circle. We denote as the one of these two points such that is between and .

We are first required to show the existence of the limit in (2.1) (for ), which we can write as , since and both lie on the aforementioned circle. To do this, we will initially assume that and show that exists, after which it will be clear that an entirely analogous argument establishes the same value for .

We have
since is between and . Note that is the length of , according to Definition 2.2. The triangle is similar to (ultimately, because the tangent line to the circle at implies that is a right angle). Let be the length of . It follows that

If and are ever the same point for some value of , then is perpendicular to , and will remain so for any other , so that and will always be the same point. In that case, , and . It then follows that the right-hand side of (2.5) tends to zero with (since this right-hand side is ), so the right-hand side of (2.4) also tends to zero with , which means that the first term on the right-hand side of (2.3) tends to zero with . But since , the second term on the right-hand side of (2.3) is zero. It then follows that and, in particular, the required limit exists.

Thus, suppose and are different, and consider . This defines a set of similar triangles for all values of , because the angle does not change as varies and remains a right angle. Consequently, is a nonzero constant for all since this is the ratio of two particular sides of each triangle in this set of similar triangles. This means that , since could not otherwise remain a constant because tends to zero with . On the other hand, because is also constant as varies (being a ratio of a different combination of sides of these same triangles), it must also follow that (which also means that is not between and for small ). So, it must be that the right-hand side of (2.5) tends to zero as tends to zero. Consequently, the right-hand side of (2.4) also tends to zero as tends to zero (because, as we have noted, is constant as varies). Hence, the first term on the right-hand side of (2.3) also tends to zero as becomes small. This means that the left-hand side of (2.3) has the same limit as tends to zero as the second term on the right-hand side of (2.3) (assuming the limit exists). But, we have already noted that the term is a constant as *ϵ* varies, since it is determined by the ratio of particular sides of , and (as we have noted) for any all such triangles are similar. In fact (removing the absolute value sign in the numerator), we further claim that
is a constant for all small . To see this, recall that for small , is not between and , and is constant. Now, if changes from being between and versus not being between and as varies, the latter angle must change from being an acute angle to being an obtuse angle, or vice versa, which contradicts the fact that the angle is constant for all . For small , it follows that is always between and , or is always between and —which establishes our claim that (2.6) is constant. Therefore,
An analogous argument establishes the corresponding limit for negative $\epsilon$, because the relevant triangles are similar for *all* nonzero values of $\epsilon$. This establishes the limit in (2.1).

Next, we establish the continuity of each directional derivative. Consider any sequence of line segments such that form a parallelogram for all , and the limit of the length of the line segments is zero (which implies that the limit of the length of the line segments is also zero). To establish the continuity of a directional derivative at , it is required to show that the left-hand side of the following equation is zero:
For each , is the ratio of particular sides of any member of a particular set of similar triangles, as is also the case for as above. Furthermore, as increases, the ratio of the lengths of the sides of the triangle relevant to converges to the same ratio as that of the corresponding triangles relevant to , since converges to and converges to . Therefore, necessarily tends to zero, implying continuity of the directional derivative at .

We next show that the largest directional derivative at associated with a direction line segment of a particular fixed length is such that are collinear with between and . We have from (2.7) (and the equation in the following sentence) that the directional derivative is . Thus, we only need to show that , with equality if and only if is on the line containing , because once that is established it is easy to show that if are collinear, then will be negative if is not between and , and will be positive otherwise.

Now, is the length of the hypotenuse of right triangle , and is the length of the side that is on the line containing . So, we only need to show that a nonhypotenuse side of a right triangle has a shorter length than the hypotenuse. Thus, consider any right triangle with being the right angle (vertex here has nothing to do with vertex of our earlier given right triangle ). Consider a circle centered at having radius . This circle intersects the line containing at a point . Suppose the length of is less than the length of , meaning that the length of the hypotenuse is smaller than the length of one of the other sides. Then is external to the circle. So, consider a second circle centered at but now with radius given by the length of . This circle intersects the line containing at the point . The two circles are concentric, with the circle containing lying wholly external to the circle containing . Now, the second circle (containing ) has a tangent at making a right angle with the line containing (tangents are perpendicular to the radius at the point of tangency). But the line containing is also perpendicular to the line containing (since is already given as a right angle). So the point (which lies on our first circle of radius given by the length of ) must also be a point on the tangent line at the point of our second circle. This is impossible since, given two concentric circles, a tangent to a point on the circle of greater radius cannot intersect the circle of smaller radius. This contradicts the assumption that the hypotenuse is smaller than the length of one of the sides. 
Furthermore, the hypotenuse cannot equal the length of one of the other sides of the right triangle, because in that case a circle centered at with radius equal to the length of the hypotenuse would intersect the right triangle at the two points and , which would require that a tangent line to the circle at (making a right angle with the line containing ) would have another point of intersection with the circle (i.e., at , again since the line containing is also perpendicular to the line containing and there can be only one such perpendicular). Thus, unless are collinear, , and it is immediately verified that if the points are collinear. It is then evident that the value of the largest directional derivative is . Thus, if , the largest directional derivative has unit value. Hence, Lemma 2.3 is proved.

Definition 2.2 actually suggests two operations, and these will be referred to in Lemma 2.5 as “the Definition 2.2 associated operations.” That is, we define the (more general) expression $\alpha p + \beta q$ subject to the following.

(I) The multiplication of a scalar with a point, $\alpha p$, is defined to be the point $s$ on the line containing $op$ such that $\eta(s) = |\alpha|\,\eta(p)$, and such that $o$ is between $s$ and $p$ if and only if $\alpha$ is negative.

(II) The “sum” of two points, $p + q$, is defined as follows.
(i) If $o, p, q$ are not collinear, $p + q$ is defined to be the point $s$ such that the vertices $o, p, s, q$ form a parallelogram.
(ii) If either $p$ or $q$ is the point $o$, then $p + q$ is the point that is not $o$, or is $o$ if $p$ and $q$ are both $o$.
(iii) If $p$ and $q$ are the same point, then $p + q$ is the point $s$ on the line containing $op$ such that $\eta(s) = 2\eta(p)$ and $o$ is not between $s$ and $p$.
(iv) If $o, p, q$ are distinct and collinear,
(1) when $o$ is not between $p$ and $q$, then $p + q$ is the point $s$ on the line containing $op$ such that $\eta(s) = \eta(p) + \eta(q)$ and $o$ is not between $s$ and $p$;
(2) when $o$ is between $p$ and $q$,
(a) if $\eta(p) > \eta(q)$, then $p + q$ is the point $s$ on the line containing $op$ such that $\eta(s) = \eta(p) - \eta(q)$ and $s$ is between $o$ and $p$,
(b) if $\eta(q) > \eta(p)$, then $p + q$ is the point $s$ on the line containing $oq$ such that $\eta(s) = \eta(q) - \eta(p)$ and $s$ is between $o$ and $q$,
(c) if $\eta(p) = \eta(q)$, then $p + q$ is $o$.

With this understanding, it is clear that $p_{\epsilon}$ as defined in Definition 2.2 is the same thing as $p + \epsilon\,(q + (-1)p)$. Of course, this suggests operations on a vector space.

*Remark 2.4.* The central role of the derivative of the norm function as featured in Definition 2.2 is not without precedent. The derivative of the norm also plays an important role in semi-inner product spaces [2, 3] (and premanifolds [4]), where the condition that the space be continuous (or, alternatively, uniformly continuous) can be shown to be equivalent to the condition that the norm is Gâteaux differentiable (or, alternatively, uniformly Fréchet differentiable). Naturally, once the isomorphism between the plane and $\mathbb{R}^2$ is established, our definition is seen to be analogous to the standard one.

Lemma 2.5. *There is an isomorphism between the vector space $\mathbb{R}^2$ with component-wise addition of elements and component-wise multiplication of elements by scalars, and the Euclidean plane with the Definition 2.2 associated operations.*

*Proof of Lemma 2.5.* We first identify a particular Cartesian axis system on the plane. One Cartesian system is already present, consisting of the lines containing the line segments forming the right angle of the right triangle given at the outset of this proof (i.e., $ao$ and $ab$). However, since the length function $\eta$ is referenced to $o$, we use the latter axis system to set up a different Cartesian system at $o$. Using the parallel axiom, consider the line through $o$ that is parallel to the line containing $ab$, and furthermore (again using the parallel axiom) consider a point $c$ on this new parallel line such that the vertices $o, a, b, c$ form a rectangle. The lines containing $oa$ and $oc$ are our Cartesian system (the “$x$-axis” and the “$y$-axis”).

We identify $o$ with the ordered pair $(0, 0)$. Any point $p$ in the plane different from $o$ is associated with a unique ordered pair $(\alpha, \beta)$ implied by the line segment $op$. That is, we take the orthogonal projection of $p$ to the line containing $oa$ and take $|\alpha|$ to be the length of the line segment formed by $o$ and this projection of $p$. $\alpha$ is negative or positive depending on whether or not $o$ is between $a$ and the projection of $p$ to the line containing $oa$. $\beta$ is defined analogously with respect to $c$ and the $y$-axis. Conversely, every ordered pair of real numbers is associated with a point in the plane. That is, for $(\alpha, \beta)$ we find the point $u$ on the $x$-axis with $\eta(u) = |\alpha|$ such that $o$ is between $u$ and $a$ if the sign of $\alpha$ is negative, and $o$ is not between $u$ and $a$ if the sign of $\alpha$ is positive, and similarly for a point $v$ on the $y$-axis relating to $\beta$. Then the point associated with $(\alpha, \beta)$ is the point $p$ such that $oupv$ is a rectangle. Its existence and uniqueness is guaranteed by the parallel axiom. It is further obvious that the first construction associates $(\alpha, \beta)$ to $p$ if and only if the second construction associates $p$ to $(\alpha, \beta)$.

The above is therefore a one-to-one mapping between the points of the Euclidean plane and the points of $\mathbb{R}^2$ (ordered pairs of real numbers), explicitly employing the parallel and betweenness axioms. $\mathbb{R}^2$ becomes a vector space once we specify that ordered pairs (i.e., vectors) representing points in the plane can be added together component-wise and multiplied by scalars component-wise. The isomorphism between the vector space $\mathbb{R}^2$ and the Euclidean plane with the Definition 2.2 associated operations is then easily verified. Thus, Lemma 2.5 is established.

It follows that the isomorphism in the above lemma leads to an expression for the directional derivative in the vector space $\mathbb{R}^2$ that gives the same result as it did in the original Euclidean plane. In particular, basic arguments from multivariable calculus establish the existence of a total derivative, the gradient $\nabla\eta(p)$, as the ordered pair of directional derivatives of $\eta$ at $p$ with direction line segments defined by unit-length line segments parallel to the $x$-axis and $y$-axis, such that each directional derivative is the inner product of $\nabla\eta(p)$ with the ordered pair in $\mathbb{R}^2$ corresponding to the particular direction line segment. For example, this is seen by splitting a difference of $\eta$ values into differences along the two axis directions, with the intermediate values supplied by the mean value theorem. Applying the definition of directional derivative to both sides, one sees that the directional derivative is given by the usual inner product of $\nabla\eta(p)$ with the direction line segment. This is accomplished without use of the Euclidean norm or prior use of the inner product operation. Being an ordered pair, $\nabla\eta(p)$ is a point in the plane, and we can refer to $\eta(\nabla\eta(p))$. Furthermore, we have already established that the largest directional derivative of $\eta$ at $p$ associated with a line segment $pq$ of a given length is such that $o, p, q$ are collinear with $p$ between $o$ and $q$. A standard argument establishes that $o$, $p$, and $\nabla\eta(p)$ are then collinear, and $o$ is not between $p$ and $\nabla\eta(p)$ (because the gradient is proportional to the direction line segment associated with the greatest directional derivative and, according to Lemma 2.3, this direction line segment lies on the line containing $op$ with $p$ between $o$ and $q$). So, we have $\nabla\eta(p) = \gamma\,p$ for some $\gamma > 0$. Furthermore, a standard multivariable calculus argument establishes that the magnitude of $\nabla\eta(p)$ (the length of the segment from $o$ to $\nabla\eta(p)$, i.e., $\eta(\nabla\eta(p))$) is the value of the largest directional derivative associated with a direction line segment of length unity, which according to Lemma 2.3 is unity. Thus, $\eta(\nabla\eta(p)) = 1$, so that $\gamma = 1/\eta(p)$ and $p = \eta(p)\,\nabla\eta(p)$; in particular this holds at $p = b$, where $\eta(b)$ is the length of the hypotenuse $ob$.

In fact, it is easy to see that the equations of the last sentence of the prior paragraph pertain not just to $b$ but to any point $p$ in the plane different from $o$. They hold trivially if $p$ is a point on the $x$-axis or $y$-axis. For any other point $p$, we can consider $op$ to be the hypotenuse of a right triangle $op'p$, where $p'$ is the orthogonal projection of $p$ to the $x$-axis, and then proceed in the same manner as we have already done for $b$ (noting that our axes are unchanged). Also, as stated at the outset, $\eta(o) = 0$, and lately we have the identification of $o$ as the ordered pair $(0, 0)$. Including the latter, the equations in the last sentence of the prior paragraph constitute a partial differential equation. For $p$ identified in our Cartesian system as $(\alpha, \beta)$, it is easily verified by standard differentiation that one solution is $\eta(p) = \sqrt{\alpha^2 + \beta^2}$. To show that this solution is unique, consider that $p = \eta(p)\,\nabla\eta(p)$ means that $p$ and $\nabla\eta(p)$ are collinear with $o$. Thus, for the points $(t\alpha, t\beta)$ on the line containing $op$, writing $f(t) = \eta(t\alpha, t\beta)$, we have the ordinary differential equation $f(t)\,f'(t) = t\,(\alpha^2 + \beta^2)$, with $f(0) = 0$ and $f \geq 0$. This has a unique solution, so that the already identified solution, $\eta(p) = \sqrt{\alpha^2 + \beta^2}$, must be the only solution. Thus, we have derived the Euclidean norm, and hence proved the Pythagorean theorem.
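As an elementary numerical footnote to the proof (our own illustration, not part of the derivation), the following Python sketch confirms by central differences that $\eta(\alpha, \beta) = \sqrt{\alpha^2 + \beta^2}$ satisfies the two defining equations.

```python
import math

def eta(a, b):
    """The claimed solution of the partial differential equation."""
    return math.hypot(a, b)

def grad_eta(a, b, h=1e-6):
    """Central-difference gradient of eta at (a, b)."""
    return ((eta(a + h, b) - eta(a - h, b)) / (2 * h),
            (eta(a, b + h) - eta(a, b - h)) / (2 * h))

a, b = 3.0, 4.0
ga, gb = grad_eta(a, b)
# (1.3): the point equals eta times the gradient, p = eta(p) grad eta(p)
assert abs(eta(a, b) * ga - a) < 1e-6 and abs(eta(a, b) * gb - b) < 1e-6
# (1.4): the gradient is a unity-scaled orientation point, eta(grad eta(p)) = 1
assert abs(eta(ga, gb) - 1.0) < 1e-6
```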

Equations (1.3) and (1.4) could also be used in the definition of the Euclidean norm as follows.

Corollary 2.6. *A function $\eta$ is the Euclidean norm if and only if it is continuous, vanishes at the origin, and at any other point it satisfies (1.3) and (1.4).*

Equations (1.3) and (1.4) indicate that, from a differentiable viewpoint, the Euclidean norm is a scaling-orientation function in the decomposition of a point as the product of a scalar with a unity-scaled orientation point derived from the function's gradient. We can consider this set of equations to represent “Euclidean decomposition.”

But the Euclidean norm does not address the multiplicative structure of an algebra and so does not have an essential role in most algebras. Instead, we shall see that the role of $\eta$ on the vector space is taken up by a function $\lambda$ on topological *-algebras defined on $\mathbb{R}^n$, and we will consider (1.3) and (1.4) so modified to represent “Jacobian decomposition.”

#### 3. Jacobian Decomposition and Inverse Norm

The “defining” equations of the Euclidean norm, (1.3) and (1.4), make no reference to the multiplicative structure of an algebra. Nevertheless, the Euclidean norm has application to the Cayley-Dickson algebras (an unending sequence of real unital topological *-algebras beginning with the only four real normed division algebras, $\mathbb{R}$, $\mathbb{C}$, the quaternions, and the octonions). Each of these algebras is characterized by a basis $\{\iota_0, \iota_1, \dots, \iota_{n-1}\}$, with $n = 2^m$ for any nonnegative integer $m$. Multiplication is distributive and thus defined by a multiplication table relevant to the basis. In particular, $\iota_0$ is the multiplicative identity and $\iota_j^2 = -\iota_0$ for $j \neq 0$. Any point $x = \sum_j \chi_j\,\iota_j$ (with each $\chi_j$ real) has a conjugate $x^* = \chi_0\,\iota_0 - \sum_{j \neq 0} \chi_j\,\iota_j$, with $x \mapsto x^*$ evidently an involution. In particular, $x\,x^* = x^*x = \eta(x)^2\,\iota_0$, where $\eta$ is the Euclidean norm. Thus, $x^{-1} = x^*/\eta(x)^2$ for $x \neq 0$. Hence, the Euclidean norm in this case helps express the inverse of a point. Being the Euclidean norm, $\eta$ satisfies (1.3) and (1.4), and so also defines a Euclidean decomposition of the point. But given the above involution, it is easy to show that $[\nabla\eta]^{*}(x^{-1}) = \nabla\eta(x)$ (where, as always, $[\nabla\eta]^{*}(x^{-1})$ represents evaluation of the gradient at the point $x^{-1}$ followed by application of the involution). Substituting the above into (1.3), we obtain
$$x = \eta(x)\,[\nabla\eta]^{*}(x^{-1}). \tag{3.1}$$
Since (1.4) holds for all $x \neq 0$, and all such points have inverses, we must also have $\eta\bigl([\nabla\eta]^{*}(x^{-1})\bigr) = 1$. Now we have a formulation for the Euclidean norm that makes reference to unital algebraic structure. On Cayley-Dickson algebras, it is equivalent to the Pythagorean theorem. When this latter decomposition exists on a topological *-algebra but the solution is not the Euclidean norm, it can be considered to be an algebraic ghost of the Pythagorean theorem.
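The conjugation and inverse relations just used can be made concrete on the four-dimensional Cayley-Dickson algebra, the quaternions. The following Python sketch (our own illustration, hardcoding the standard Hamilton multiplication table) checks that $q\,q^*$ equals $\eta(q)^2$ times the identity and that $q^{-1} = q^*/\eta(q)^2$.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product on the 4-dimensional Cayley-Dickson algebra
    (the quaternions), stored as arrays [chi0, chi1, chi2, chi3]."""
    a, b, c, d = p
    w, x, y, z = q
    return np.array([a*w - b*x - c*y - d*z,
                     a*x + b*w + c*z - d*y,
                     a*y - b*z + c*w + d*x,
                     a*z + b*y - c*x + d*w])

conj = lambda q: q * np.array([1.0, -1.0, -1.0, -1.0])   # the involution *
one = np.array([1.0, 0.0, 0.0, 0.0])                     # basis element iota_0

q = np.array([1.0, 2.0, -1.0, 3.0])
n2 = float(np.dot(q, q))                                 # squared Euclidean norm

assert np.allclose(qmul(q, conj(q)), n2 * one)           # q q* = eta(q)^2 iota_0
assert np.allclose(qmul(conj(q) / n2, q), one)           # q^{-1} = q*/eta(q)^2
```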

*Definition 3.1.* For a topological *-algebra $A$ defined on $\mathbb{R}^n$, a continuous function $\lambda$ is an *inverse norm* if it is zero on the nonunits of $A$, as a function on $\mathbb{R}^n$ it is differentiable at the units of $A$, and for any unit $u$,
$$u = \lambda(u)\,[\nabla\lambda]^{*}(u^{-1}), \tag{3.2}$$
with
$$\lambda\bigl([\nabla\lambda]^{*}(u^{-1})\bigr) = 1. \tag{3.3}$$

The above equations mimic the equations for the Euclidean norm referred to in Corollary 2.6, but instead decompose a point as a function's value at the point multiplied by a unity-scaled orientation point dependent on the function's gradient at the *inverse* of the point (or, alternatively to the Euclidean norm's direct expression of a point, (3.2) and (3.3) express the *inverse* of a point, i.e., substituting $u^{-1}$ for $u$ in the latter equations). Thus, we use the term “inverse norm.”

Of course, from (3.1) we have already shown the following.

Theorem 3.2. *The Euclidean norm is an inverse norm on the Cayley-Dickson algebras. *

However, inverse norms have applicability well beyond the Cayley-Dickson algebras.

Theorem 3.3 (Jacobi). *For a member $x$ of the algebra of real $n \times n$ matrices $M_n(\mathbb{R})$, let $\delta(x) = \det x$. If $x$ is a unit, then*
$$\nabla\delta(x) = \delta(x)\,\bigl(x^{-1}\bigr)^{*}, \tag{3.4}$$
*where $*$ indicates matrix transpose.*

The above is a well-known, immediate consequence of Jacobi's formula in matrix calculus (the latter expresses the gradient of the determinant in terms of the adjugate matrix).

Corollary 3.4. *For the algebra of real $n \times n$ matrices $M_n(\mathbb{R})$,*
$$\lambda(x) = \sqrt{n}\,\bigl[\det x\bigr]^{1/n} \tag{3.5}$$
*is an inverse norm.*

*Proof.* $\lambda$ is evidently continuous everywhere, as well as differentiable on the units (the invertible matrices), and it vanishes on the nonunits. For any unit $x$ on this algebra, we have $\delta(x) \neq 0$, and
$$\nabla\lambda(x) = \frac{\sqrt{n}}{n}\,\delta(x)^{\frac{1}{n}-1}\,\nabla\delta(x). \tag{3.6}$$
For a unit $x$, (3.4) implies $\nabla\delta(x) = \delta(x)\,(x^{-1})^{*}$. Substituting this into (3.6), and using (3.5), we obtain
$$\nabla\lambda(x) = \frac{1}{n}\,\lambda(x)\,\bigl(x^{-1}\bigr)^{*}. \tag{3.7}$$
If we evaluate the left-hand side above at the point $x^{-1}$ instead of evaluating it at $x$, then noting from (3.5) that $\lambda(x^{-1}) = n/\lambda(x)$, we obtain (3.2). Applying $\lambda$ in (3.5) to both sides of (3.7), we obtain (3.3).
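Corollary 3.4 admits a direct numerical check. The following Python sketch (our own illustration; `num_grad` is a hypothetical finite-difference helper) verifies the two defining equations for $\lambda(x) = \sqrt{n}\,(\det x)^{1/n}$ on a random $3 \times 3$ unit with positive determinant, with the involution taken as matrix transpose.

```python
import numpy as np

n = 3
lam = lambda m: n**0.5 * np.linalg.det(m)**(1.0 / n)   # det > 0 branch

def num_grad(f, m, h=1e-6):
    """Entry-wise central-difference gradient of a scalar matrix function."""
    g = np.zeros_like(m)
    for i in range(n):
        for j in range(n):
            e = np.zeros_like(m); e[i, j] = h
            g[i, j] = (f(m + e) - f(m - e)) / (2 * h)
    return g

rng = np.random.default_rng(1)
u = rng.random((n, n)) + n * np.eye(n)        # a unit with det > 0
g_star = num_grad(lam, np.linalg.inv(u)).T    # gradient at u^{-1}, then transpose

assert np.allclose(lam(u) * g_star, u, atol=1e-5)   # the first defining equation
assert abs(lam(g_star) - 1.0) < 1e-5                # the second defining equation
```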

In analogy with Euclidean decomposition (1.3) and (1.4), we can consider the equations of Definition 3.1 to represent “Jacobian decomposition” (i.e., in view of Theorem 3.3).

For the algebra $\mathbb{R}^n$ with component-wise addition and multiplication, it is also easy to show that an inverse norm is given by the product of $\sqrt{n}$ with the geometric mean of the absolute values of the components of a point. That is, for a point $x = (\chi_1, \dots, \chi_n)$, set
$$\lambda(x) = \sqrt{n}\,\Bigl(\,\prod_{j=1}^{n} |\chi_j|\Bigr)^{1/n}. \tag{3.8}$$
Note that $\lambda$ is continuous and vanishes on the nonunits. If $x$ is a unit, then
$$\nabla\lambda(x) = \frac{\lambda(x)}{n}\,\bigl(\chi_1^{-1}, \dots, \chi_n^{-1}\bigr) = \frac{\lambda(x)}{n}\,x^{-1}. \tag{3.9}$$
It is then a simple task to verify that $\lambda$ satisfies the requirements of Definition 3.1 (with the involution $*$ as the identity).
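The claim for the component-wise algebra can likewise be spot-checked numerically; the Python sketch below (our own illustration) verifies the two defining equations of Definition 3.1 for $\lambda$ equal to $\sqrt{n}$ times the geometric mean of the absolute values of the components.

```python
import numpy as np

n = 4
lam = lambda x: n**0.5 * np.prod(np.abs(x))**(1.0 / n)   # sqrt(n) * geometric mean

def num_grad(f, x, h=1e-7):
    """Central-difference gradient of a scalar function on R^n."""
    g = np.zeros_like(x)
    for i in range(n):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([0.5, -2.0, 3.0, 1.5])          # a unit: every component nonzero
g = num_grad(lam, 1.0 / x)                   # gradient at the inverse point

assert np.allclose(lam(x) * g, x, atol=1e-5)  # decomposition, with * the identity
assert abs(lam(g) - 1.0) < 1e-5               # unit scaling
```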

Now we turn to Jordan algebras.

Theorem 3.5. *The Minkowski norm is an inverse norm on the spin factor Jordan algebras.*

*Proof.* Thinking of $\mathbb{R}^{n+1}$ in the format of $\mathbb{R}^n \times \mathbb{R}$, write its points as $u = (x, \xi)$, with $x \in \mathbb{R}^n$ and $\xi \in \mathbb{R}$. We introduce a multiplication operation $\circ$ such that
$$(x, \xi) \circ (y, \upsilon) = \bigl(\upsilon\,x + \xi\,y,\; x \cdot y + \xi\,\upsilon\bigr), \tag{3.10}$$
where “$\cdot$” is the usual inner product on $\mathbb{R}^n$. This multiplication defines a commutative but nonassociative algebra, the spin factor Jordan algebra [5]. The multiplicative identity element is evidently the point $(x, \xi)$ where $x = 0$ and $\xi = 1$. An inverse element exists for the points such that $\mu \equiv \xi^2 - x \cdot x \neq 0$. That is,
$$(x, \xi)^{-1} = \frac{(-x,\; \xi)}{\mu} \tag{3.11}$$
when $\mu \neq 0$.

Now we define $\lambda$ such that for $u = (x, \xi)$,
$$\lambda(u) = \mu^{1/2} = \bigl(\xi^2 - x \cdot x\bigr)^{1/2}, \tag{3.12}$$
where the above square root represents the principal value. On the domain comprised of the units of the spin factor Jordan algebra (the points $u = (x, \xi)$ such that $\mu \neq 0$), we have
$$\nabla\lambda(u^{-1}) = \frac{(x,\,\xi)}{\mu\,\lambda(u^{-1})} = \frac{(x,\,\xi)}{\lambda(u)} = \frac{u}{\lambda(u)}, \tag{3.13}$$
where the three equalities follow from (3.11) and (3.12). Hence, on the units, we have $u = \lambda(u)\,\nabla\lambda(u^{-1})$. Taking the involution to be the identity, the latter equation is equivalent to the first equation of Definition 3.1.

Applying $\lambda$ to both sides of the first equality in (3.13) and using (3.12), we obtain $\lambda\bigl(\nabla\lambda(u^{-1})\bigr) = 1$ for any unit $u$. This is equivalent to the second equation of Definition 3.1.
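For concreteness, the spin factor product, identity, inverse, and Minkowski norm can be exercised numerically. The Python sketch below is our own illustration with $n = 2$, restricted to a point where $\xi^2 - x \cdot x > 0$ so that the norm is real.

```python
import numpy as np

def jmul(p, q):
    """Spin factor product on R^n x R: (x, xi) o (y, upsilon)."""
    x, xi = p[:-1], p[-1]
    y, ups = q[:-1], q[-1]
    return np.append(ups * x + xi * y, np.dot(x, y) + xi * ups)

lam = lambda p: np.sqrt(p[-1]**2 - np.dot(p[:-1], p[:-1]))   # Minkowski norm
e = np.array([0.0, 0.0, 1.0])                                # identity (0, 1)

p = np.array([1.0, 2.0, 3.0])        # mu = xi^2 - x.x = 9 - 5 = 4 != 0: a unit
mu = p[-1]**2 - np.dot(p[:-1], p[:-1])
p_inv = np.append(-p[:-1], p[-1]) / mu

assert np.allclose(jmul(p, e), p)               # e is the multiplicative identity
assert np.allclose(jmul(p, p_inv), e)           # the inverse formula
assert abs(lam(p) * lam(p_inv) - 1.0) < 1e-12   # lam(u) lam(u^{-1}) = 1 here
```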

On the other hand, the special Jordan algebra obtained from the algebra of $n \times n$ matrices $M_n(\mathbb{R})$ has the product of two of its members $x$ and $y$ given by $x \circ y = \frac{1}{2}(xy + yx)$, where $xy$ and $yx$ indicate the usual matrix product. We then have $x \circ x^{-1} = \mathbf{1}$, where $\mathbf{1}$ is the identity element in the algebra (in this case, the identity matrix). Therefore, the Jordan algebra inverse is the usual matrix inverse. Consequently, the associated inverse norms are the same as those for the algebra of matrices $M_n(\mathbb{R})$. Thus, Jacobian decomposition holds for the Jordan algebra obtained from the matrix algebra.

Supplying an inverse norm nominally requires solution of a nonlinear partial differential equation, (3.2). However, if we apply further restrictions on the nature of $\lambda$, one can obtain a linear equation. In particular, for each algebra example considered up till now there is a constant $\kappa$ such that
$$\lambda(u)\,\lambda(u^{-1}) = \kappa \tag{3.14}$$
for any unit $u$, so that (3.2) evaluated at $u^{-1}$ instead of $u$ implies (1.6) and
$$\kappa\,[\nabla\lambda]^{*}(u) = \lambda(u)\,u^{-1}. \tag{3.15}$$
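On the matrix algebra this linear form can be checked directly; the following Python sketch (our own illustration, reusing a finite-difference gradient) confirms it with the constant equal to $n$ and the involution taken as matrix transpose.

```python
import numpy as np

n = 3
lam = lambda m: n**0.5 * np.linalg.det(m)**(1.0 / n)   # det > 0 branch

def num_grad(f, m, h=1e-6):
    """Entry-wise central-difference gradient of a scalar matrix function."""
    g = np.zeros_like(m)
    for i in range(n):
        for j in range(n):
            e = np.zeros_like(m); e[i, j] = h
            g[i, j] = (f(m + e) - f(m - e)) / (2 * h)
    return g

rng = np.random.default_rng(2)
u = rng.random((n, n)) + n * np.eye(n)   # a unit with det > 0

# the linear form: kappa [grad lam(u)]^* = lam(u) u^{-1}, with kappa = n
lhs = n * num_grad(lam, u).T
rhs = lam(u) * np.linalg.inv(u)
assert np.allclose(lhs, rhs, atol=1e-5)
```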

However, not all unital algebras have an inverse norm satisfying (3.15). First, since an inverse norm is zero on nonunits, and a nonunital algebra consists only of nonunits, the inverse norm on a nonunital algebra is identically zero. On the other hand, one might ask whether an inverse norm satisfying (3.15) exists on the unital hull [5] of a nonunital topological algebra.

Theorem 3.6. *The unital hull of a nonunital topological algebra does not have an inverse norm satisfying (3.15). *

*Proof.* The unital hull of a nonunital algebra $A$ is defined by elements $(x, \rho)$ for $x \in A$ and $\rho \in \mathbb{R}$, with component-wise addition of elements and multiplication defined by
$$(x, \rho)\,(y, \sigma) = \bigl(xy + \sigma x + \rho y,\; \rho\sigma\bigr), \tag{3.16}$$
where $xy$ indicates the product between elements of $A$. The identity element is $(0, 1)$. Equation (3.15) requires that the units satisfy
$$\lambda(u)\,(0, 1) = \kappa\,u\,[\nabla\lambda]^{*}(u), \tag{3.17}$$
where $u = (x, \rho)$ and $[\nabla\lambda]^{*}(u) = (g, \partial\lambda/\partial\rho)$, the scalar component being unaffected by the involution. Using the multiplication rule, we can write this as
$$\lambda(u)\,(0, 1) = \kappa\,\Bigl(x\,g + \rho\,g + \frac{\partial\lambda}{\partial\rho}\,x,\;\; \rho\,\frac{\partial\lambda}{\partial\rho}\Bigr). \tag{3.18}$$
Thus, it is required that $\lambda = \kappa\,\rho\,\partial\lambda/\partial\rho$, so that $\lambda = \phi(x)\,|\rho|^{1/\kappa}$, where $\phi$ indicates some function independent of $\rho$. In addition, (3.18) requires that the first component of its right-hand side be zero. But with regard to variation in $\rho$, the requirement that $\lambda = \phi(x)\,|\rho|^{1/\kappa}$ means that the first term of this component is proportional to $|\rho|^{1/\kappa}$, the second term to $|\rho|^{1/\kappa + 1}$, and the third term to $|\rho|^{1/\kappa - 1}$. It is thus impossible for this component to remain zero as $\rho$ varies unless $\phi$, and hence $\lambda$, is identically zero. But in that case, the equations of Definition 3.1 cannot be satisfied for the units. Thus, an inverse norm satisfying (3.15) does not exist.

One can get even more restrictive and consider algebras for which not only is $\lambda(u)\,\lambda(u^{-1})$ constant on the units (i.e., (3.14) is satisfied) but in addition the units constitute a group on which a multiple of $\lambda$ is a homomorphism. In fact, the inverse norm on the first four Cayley-Dickson algebras satisfies this prescription (the latter are the only real normed division algebras by Hurwitz's theorem, and the Euclidean norm is a homomorphism on their units). The inverse norm on the matrix algebra $M_n(\mathbb{R})$ also satisfies this requirement (i.e., $\lambda/\sqrt{n}$ is a homomorphism on the group of units). Along these lines, we observe that the Cayley-Dickson algebras and the set of subalgebras of the real matrix algebras overlap on the algebras of the real numbers, the complex numbers, and the quaternions, which happen to be the only associative real normed division algebras. The Cayley-Dickson algebra sequence contains the only other real normed division algebra: the (noncommutative, nonassociative) algebra of octonions. Thus, the octonions provide an example of an algebra with an inverse norm that is a homomorphism on the group of units, but without a representation as a matrix subalgebra. This point of nonoverlap is typical of the exceptionalism of the octonions [6].
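The octonion case can be exhibited computationally: one Cayley-Dickson doubling of the quaternions yields an eight-dimensional algebra whose Euclidean norm is multiplicative even though the algebra is nonassociative. The Python sketch below is our own illustration, using one standard doubling convention.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as arrays [w, x, y, z]."""
    a, b, c, d = p
    w, x, y, z = q
    return np.array([a*w - b*x - c*y - d*z,
                     a*x + b*w + c*z - d*y,
                     a*y - b*z + c*w + d*x,
                     a*z + b*y - c*x + d*w])

qconj = lambda q: q * np.array([1.0, -1.0, -1.0, -1.0])

def omul(p, q):
    """Octonion product via one Cayley-Dickson doubling of the quaternions,
    with the convention (a, b)(c, d) = (ac - d* b, da + b c*)."""
    a, b = p[:4], p[4:]
    c, d = q[:4], q[4:]
    return np.concatenate([qmul(a, c) - qmul(qconj(d), b),
                           qmul(d, a) + qmul(b, qconj(c))])

rng = np.random.default_rng(3)
p, q, r = (rng.standard_normal(8) for _ in range(3))

# the Euclidean norm is a homomorphism on the units: |pq| = |p| |q|
assert np.isclose(np.linalg.norm(omul(p, q)),
                  np.linalg.norm(p) * np.linalg.norm(q))

# yet the octonions are nonassociative: (pq)r != p(qr) in general
assert not np.allclose(omul(omul(p, q), r), omul(p, omul(q, r)))
```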