Abstract

We view aging as the process in which the built-in entropy-decreasing function of the body deteriorates as internal time passes. Thence comes our definition: "life is a one-way flow, along the intrinsic time axis toward the ultimate heat death, of denumerably many metabolic reactions, each at local equilibrium in view of homeostasis." Our disposition, however, is not reductionistic, as most approaches have been; rather, it holds that a dynamic system as complicated as life is not feasible to model by reducing it to minor fragments, but belongs to holism. Here mathematics can play an essential role because of its freedom from the practical and immediate phenomena under its nose. This paper is the outcome of a hard trial of mathematizing scientific disciplines which would allow a description of life in terms of the traditional means of mathematics, physics, chemistry, biology, and so forth. In the paper, we shall give three basic math-phys-chem approaches to life phenomena, namely entropy, the molecular orbital method, and formal language theory, all at the molecular level. They correspond to three mathematical disciplines: probability, linear algebra, and free groups, respectively. We shall give some basics for the Rényi $\alpha$-entropy, the Chebyshëv polynomials, and the notion of free groups in the respective places. Toward the end of the paper, we give some of our speculations on life and the entropy increase principle therein. The molecular level would be a good starting point for constructing plausible math-phys-chem models.

1. Introduction

Life science seems to have been prevailing in modern science; it incorporates a great number of relevant subjects ranging from molecular biology to medicine, all of which seem to belong to "reductionism," that is, "the whole is the totality of its parts." Molecular biology presupposes that "genotype determines phenotype," namely, that the gene codes (codons for amino acids) preserved in DNA determine all the phenomenal aspects of the living organisms designed by these codes. A traditional way that biology has been tracking is that of "classifying" creatures according to their "species," and molecular biology has been classifying the ingredients in the same spirit but at a much smaller, ultramicroscopic level.

Classification is one of the most effective powers of mathematics. This is because one of the main objectives of mathematics is to classify the objects of study by sorting out some common features (structures, symmetries), thereby neglecting irrelevant specificities and extracting the properties uplifted to absolute abstraction.

In extracting common features of a class of objects of study, mathematicians often appeal to analogy. This reminds us of a seemingly forgotten way of thinking in [1] of making equivalent transformations between similar systems. We tacitly appeal to this principle in the paper.

The description of the roles of disciplines stated in [2, page 140], with some modifications, would serve as initiation. It says that mathematics treats electromagnetic energy, light, and heat in the Big Bang Era, which is also treated independently by physics. The latter goes on to treat the Material Domain, consisting of macromolecules, molecules, and atoms, in common with chemistry, which is the main character in this domain. Then molecular biology comes in and starts treating the Life Domain together with chemistry and biology. The domain consists of multicelled creatures, eukaryotic cells, flagella, and bacteria. Then biology deals with the Spirit and Culture Domain together with psychology and neuroscience. The domain comprises mammals and humans. Finally, the Higher Spirit Domain, consisting of metahumans, is dealt with by philosophy, literature, religion, and art. The author says that this is the scheme of evolution at the cosmic level and that, in that order, entropy decreases while fitness and orderliness increase with acceleration. But this ordering would be very much disputable, and we would rather join the bottom to the top to make the scheme circular, so that in our modified new scheme mathematics is among those literary subjects, which is the case since, as Goethe said, mathematics is frozen music!

In this paper, we will confine ourselves to a few selected constituents of living organisms. As one of the main objects of study, one may take up cells and their functions. Plenty of reasons can be given. First of all, cells are still visible under microscopes and can be studied as a manifestation of reductionism. There are some 7 thousand billion cells in the human body, and cell membranes play essential roles in maintaining life. The cells have internal and external membranes mainly made of lipids, polysaccharides, proteins, and so forth. Among these ingredients, we will be most interested in lipids and proteins: the first because the oxidation of lipids would lead to malfunction of the cells, and the second because proteins are polymers consisting of 20 basic amino acids joined by peptide bonds, and it has been made clear that the production and properties of amino acids depend on the codons which are used (see, e.g., [3]). Our main mathematical motivation is from [4], where a rather geometrical study is made of molecular biology. The following manifestation of the similarity principle between DNA and (linear) proteins [4, pages 13, 23] is noteworthy.

An amino acid is a compound consisting of two parts, constant and variable, where the constant part comprises an amino group, a carboxyl group, and a hydrogen atom attached to the central $\alpha$-carbon, while the variable part consists of a side chain which appears in 20 flavors, thus yielding the 20 basic amino acids.

A heteropolymer is an assemblage of several kinds of standard molecules—monomers—building a connected chemically homogeneous backbone with short branches attached to each monomer of the backbone.

We may summarize this in Table 1.

Motivated by the way in which three important factors are treated, namely circular and linear DNA strings ([4, page 19] and [5, page 741]), entropy [4, page 55], and the rather speculative definition of life in [6, pages 124–128] as information preserved by natural selection, we will dwell on the following mathematical material, which corresponds to the respective notions.

In Section 2, we adopt Rényi's theory of incomplete probability distributions, to be compatible with and match the real status of life, expounding the notion of entropy in, and evolution-theoretic aspects of, life.

In Sections 3 and 4.2, we will outline the theory of energy levels of carbon hydrides based on the theory of Chebyshëv polynomials as developed in [7, Chapter 1] comparing the levels of polygonal and circular carbon hydrides. It is hoped this analysis will shed some light on the corresponding problem of linear and circular DNA. In Section 5, we provide some unique exposition of the Chebyshëv polynomials to such an extent that will be sufficient for applications.

In Section 6, we state mere basics of free groups as opposed to direct (i.e., Cartesian) products [4, page 44] of many copies of an attractor.

In Section 7, we assemble some meaningful definitions of life from varied disciplines.

One of the objectives of this paper is to show the freedom as well as the power of mathematics in treating seemingly irrelevant disciplines. Mathematics is freed from the realistic restrictions which always show their effect on research in other, akin sciences: physics, chemistry, and so forth. We hope we have shown that the more complicated the situation is, as with life, the more amenable it is to mathematics.

2. Shannon’s Entropy

In [8], Shannon developed the mathematical theory of communication. Suppose we have a set of $n$ possible events whose occurring probabilities are $p_1, \ldots, p_n$, with $p_k \ge 0$ and $\sum_{k=1}^{n} p_k = 1$; that is, $\mathcal{P} = (p_1, \ldots, p_n)$ is a finite discrete probability distribution. We are to find a measure $H = H(p_1, \ldots, p_n)$ satisfying (i) $H$ is a symmetric function in $p_1, \ldots, p_n$ for every $n$, (ii) $H(p, 1-p)$ is a continuous function in $p$, $0 \le p \le 1$, (iii) if a choice is broken down into two successive choices, then the original $H$ should be the weighted sum of the individual values of $H$: for any $0 \le t \le 1$ and any distribution, $H(tp_1, (1-t)p_1, p_2, \ldots, p_n) = H(p_1, \ldots, p_n) + p_1 H(t, 1-t)$ (2.2), (iv) $H(\frac{1}{2}, \frac{1}{2}) = 1$.

Theorem 2.1 ([8, Theorem 2]). The only $H$ satisfying the conditions (i)–(iii) is of the form $H(p_1, \ldots, p_n) = -C \sum_{k=1}^{n} p_k \log_2 p_k$ (2.3), where $C > 0$ is a constant. Under the normality condition (iv), (2.3) amounts to $H(p_1, \ldots, p_n) = -\sum_{k=1}^{n} p_k \log_2 p_k$ (2.4).

We note that the same result was obtained by N. Wiener simultaneously with, and independently of, Shannon. It was Fadeev [9] who formulated Shannon's theorem in the axiomatic way as above. The base 2 is preferred because they were interested in switching circuits, on and off. For postulate (iii), cf. (2.36) below and Remark 2.7, (i).
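As a small illustration (a minimal Python sketch with probabilities of our own choosing, not from [8]), the entropy (2.4) and the grouping postulate (iii) can be checked numerically:

```python
import math

def shannon_entropy(p, base=2):
    """H(p) = -sum p_k log p_k; terms with p_k = 0 contribute 0."""
    return -sum(x * math.log(x, base) for x in p if x > 0)

# Grouping postulate (iii): splitting the outcome 0.2 into the two successive
# sub-choices 0.15 and 0.05 adds the weighted term 0.2 * H(3/4, 1/4).
lhs = shannon_entropy([0.5, 0.3, 0.15, 0.05])
rhs = shannon_entropy([0.5, 0.3, 0.2]) + 0.2 * shannon_entropy([0.75, 0.25])
print(lhs, rhs)  # both are approximately 1.6477 bits
```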

The proof of a more general theorem of Rényi (Theorem 2.5 below) as well as this theorem is easy except for one intriguing number-theoretic result originally due to Erdös [10]. We give a proof slightly modified yet in the spirit of Rényi’s well-known proof in the case of additive functions.

Definition 2.2. An arithmetic function, that is, a function $f$ defined on the set of natural numbers with complex values, is called an additive function if it satisfies $f(mn) = f(m) + f(n)$ (2.5) for all relatively prime pairs $(m, n)$, that is, those for which the gcd of $m$ and $n$, denoted by $(m, n)$, is 1. If $f$ satisfies (2.5) for all pairs $(m, n)$, it is called a completely additive function.

By the fundamental theorem of arithmetic it is clear that an additive function is completely determined by its values at prime power arguments, and a completely additive function by its values at prime arguments. Indeed, if $n = p_1^{a_1} \cdots p_r^{a_r}$ is the canonical decomposition of $n$ into prime powers, then we have $f(n) = \sum_{j=1}^{r} f(p_j^{a_j})$ in the case of an additive function $f$. In the case of completely additive functions, $f(p_j^{a_j})$ decomposes further into $a_j f(p_j)$. Let $\Delta$ denote the difference operator, $\Delta f(n) = f(n+1) - f(n)$. Note that $\Delta \log n = \log(1 + \frac{1}{n}) \to 0$ as $n \to \infty$. We may now state Erdös' theorem, which states that this limiting condition characterizes the logarithm function among additive arithmetic functions.
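To make Definition 2.2 concrete, here is a small Python sketch (the helper names are ours) of an additive function determined by its values at prime powers; with $g(p, a) = a \log p$ it recovers the logarithm, the prototype of Theorem 2.3 below:

```python
import math

def factorize(n):
    """Canonical decomposition n = p1^a1 * ... * pr^ar by trial division."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def additive(n, g):
    """Additive function built from its values g(p, a) at prime powers p^a."""
    return sum(g(p, a) for p, a in factorize(n).items())

g = lambda p, a: a * math.log(p)
f = lambda n: additive(n, g)
print(f(12), math.log(12))                  # both are approximately 2.4849
print(abs(f(90) - (f(9) + f(10))) < 1e-12)  # True: additivity on coprime 9, 10
```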

Theorem 2.3 (Erdös). If an additive arithmetic function $f$ satisfies the condition $\Delta f(n) \to 0$ as $n \to \infty$ (2.7), then one must have $f(n) = c \log n$ (2.8) for some constant $c$.

Proof. It suffices to prove (2.8) for prime powers, that is, for all prime powers $q$; for then, $f$ being additive, (2.8) for general $n$ also holds true by (2.8) for prime powers. We fix a prime power $q$ and prove that $\frac{f(N)}{\log N} \to \frac{f(q)}{\log q}$ as $N \to \infty$; since the limit on the left does not depend on $q$, this proves the claim. Further, $f$ vanishes at 1: $f(1) = f(1 \cdot 1) = 2 f(1)$, whence $f(1) = 0$.
We construct the strictly decreasing sequence of successive quotients of $N$ divided by $q$: by Euclidean division, starting from $N_0 = N$, we put $N_{k+1} = [N_k / q]$, where $[x]$ indicates the integral part of $x$, that is, the greatest integer not exceeding $x$. Let $m$ denote the greatest integer such that $q^m \le N$; solving this inequality, we get $m = [\log N / \log q]$. Then $N_m \ge 1$. From this sequence we construct a sequence all of whose terms are relatively prime to $q$, by subtracting, where necessary, a fixed positive integer from the quotient. Then, by the way of construction, each new term differs from the corresponding $N_k$ by a bounded quantity.
By the additivity of $f$ and the coprimality to $q$, we obtain $f(q M_{k+1}) = f(q) + f(M_{k+1})$, with no extra term by the vanishing condition $f(1) = 0$. Hence, noting that consecutive terms differ only by boundedly many steps, we may express $f(N)$ as a telescoping series. By the same telescoping technique applied to $\log N$, we obtain a companion expression, whence, substituting (2.17), we deduce that the difference $f(N) - \frac{\log N}{\log q} f(q)$ is a double sum of differences $\Delta f$.
Now the double sum on the right of (2.19) may be written as a single sum over increasing labels $n_1 < n_2 < \cdots$.
In view of (2.7) and the regularity of the $(C, 1)$-mean, it follows that the arithmetic mean of these differences tends to 0.
Also the number of terms is estimated by $C_1 \log N$ with a constant $C_1 = C_1(q)$, by (2.13) and the estimate on $m$.
It remains to estimate (2.19) divided by $\log N$, whereby we note that the number of terms is $O(\log N)$. Hence it follows that $\frac{f(N)}{\log N} - \frac{f(q)}{\log q} \to 0$ as $N \to \infty$. Hence the limit $c = \lim_{N \to \infty} f(N)/\log N$ must exist and be equal to $f(q)/\log q$ for every prime power $q$, that is, (2.8) follows, completing the proof.

Definition 2.4. A finite discrete generalized probability distribution is a sequence $\mathcal{P} = (p_1, \ldots, p_n)$, $p_k \ge 0$, with weight $W(\mathcal{P}) = \sum_{k=1}^{n} p_k$ satisfying $0 < W(\mathcal{P}) \le 1$. Let $\mathcal{D}$ denote the set of all finite discrete generalized probability distributions $\mathcal{P}$. For $\mathcal{P} = (p_1, \ldots, p_m)$, $\mathcal{Q} = (q_1, \ldots, q_n)$ in $\mathcal{D}$, define their Cartesian product and union by $\mathcal{P} \times \mathcal{Q} = (p_j q_k)_{j, k}$ and $\mathcal{P} \cup \mathcal{Q} = (p_1, \ldots, p_m, q_1, \ldots, q_n)$, the latter defined for $W(\mathcal{P}) + W(\mathcal{Q}) \le 1$ only.

We will characterize the entropy (of order 1) by the following 4 postulates: (i) $H(\mathcal{P})$ is a symmetric function of the elements of $\mathcal{P}$, (ii) if $(p)$ indicates the singleton, that is, the generalized probability distribution with the single probability $p$, then $H((p))$ is a continuous function of $p$ in the interval $0 < p \le 1$, (iii) for $\mathcal{P}, \mathcal{Q} \in \mathcal{D}$, we have $H(\mathcal{P} \times \mathcal{Q}) = H(\mathcal{P}) + H(\mathcal{Q})$, (iv) if $\mathcal{P}, \mathcal{Q} \in \mathcal{D}$ and $W(\mathcal{P}) + W(\mathcal{Q}) \le 1$, then we have $H(\mathcal{P} \cup \mathcal{Q}) = \frac{W(\mathcal{P}) H(\mathcal{P}) + W(\mathcal{Q}) H(\mathcal{Q})}{W(\mathcal{P}) + W(\mathcal{Q})}$ (2.36).

Theorem 2.5 (Rényi). The only function $H$ defined for all $\mathcal{P} \in \mathcal{D}$ and satisfying the above postulates is $H = c H_1(\mathcal{P})$, where $c > 0$ is a constant and $H_1(\mathcal{P}) = \frac{\sum_{k} p_k \log_2 (1/p_k)}{\sum_{k} p_k}$ (2.37) is the entropy of order 1, reducing to Shannon's for $W(\mathcal{P}) = 1$. If one imposes the normality condition (v) $H((\frac{1}{2})) = 1$, then the only function satisfying the postulates is (2.37).

Proof. Let $f(n) = H((\frac{1}{n}))$, which is legitimate in view of Postulate (iv), where $(\frac{1}{n})$ indicates the singleton distribution. Then by Postulate (iii), $f$ is additive and satisfies (2.5).
To prove (2.7) is rather involved and depends on (2.33) and Postulate (ii). Since a detailed proof of a very much related result is given in [11, pages 548–553], we refer to it and omit the proof here.
Thus by Theorem 2.3, $f(n) = c \log_2 n$, so that the assertion follows by Postulate (iv).

Corollary 2.6. For an ordinary (complete) distribution, Theorem 2.5 reduces to Theorem 2.1.

Proof. It suffices to deduce Condition (iii) in Theorem 2.1 under (2.23). Apparently, it will be enough to treat the case of splitting one probability into two. It follows from Postulate (iv) that the entropy of the union is the weighted mean of the entropies of its parts, and then from Postulate (iii) that the entropy of a product separates into a sum. Since the last two summands on the right of (2.32) amount to the weighted term $p_1 H(t, 1-t)$ in view of Postulate (iv), we arrive at Condition (iii) in Theorem 2.1, completing the proof.

Remark 2.7. (i) We note that (2.2) is equivalent to (2.33).
Indeed, writing $q_1 = t p_1$, $q_2 = (1-t) p_1$, whence $q_1 + q_2 = p_1$, we may rewrite (2.2) as $H(q_1, q_2, p_2, \ldots, p_n) = H(q_1 + q_2, p_2, \ldots, p_n) + (q_1 + q_2) H\left(\frac{q_1}{q_1 + q_2}, \frac{q_2}{q_1 + q_2}\right)$, which is (2.33).
(ii) As stated in [12, page 503], one of the advantages of the notion of entropy of an incomplete probability distribution is that, as indicated by (2.30), the factor $\log_2 (1/p_k)$ in (2.4) may be regarded as the entropy of the singleton $(p_k)$, and so (2.4), or for that matter (2.37) with $W(\mathcal{P}) = 1$, is the mean entropy (average).
(iii) Definition 2.4 is to be stated in a mathematical way as follows. Let $\Omega$ denote the set of elementary events, $\mathcal{F}$ the set of events, that is, a $\sigma$-algebra of subsets of $\Omega$ containing $\Omega$, and $P$ a probability measure, that is, a nonnegative, additive set function with $P(\Omega) = 1$. The triplet $(\Omega, \mathcal{F}, P)$ is then called a probability space, and a function defined on $\Omega$ and measurable with respect to $\mathcal{F}$ is called a random variable. What Rényi introduced is an incomplete random variable, that is, taking a subset $\Omega_1 \in \mathcal{F}$ with $0 < P(\Omega_1) \le 1$, he introduced a function $X$ defined and measurable on $\Omega_1$. An incomplete random variable may be interpreted as a quantity describing the results of an experiment depending on chance, not all of which are observable. We use the notion of incomplete random variable to describe the results of evolution, the capricious experiment by the Goddess of Nature, in which not all species are observable, since the species which we now see are those which have been chosen by natural selection.

2.1. Rényi's $\alpha$-Entropy

It would look natural to extend the arithmetic mean in (2.37) to other, more general mean values. Let $\varphi$ be an arbitrary strictly monotone and continuous function with inverse function $\varphi^{-1}$. General mean values of $x_1, \ldots, x_n$ taken with weights $w_1, \ldots, w_n$ are described as $\varphi^{-1}\left(\sum_{k=1}^{n} w_k \varphi(x_k)\right)$ (2.35), in which case $\varphi$ is called the Kolmogorov-Nagumo function associated with (2.35).

We may replace Postulate (iv) above by (iv') if $\mathcal{P}, \mathcal{Q} \in \mathcal{D}$ and $W(\mathcal{P}) + W(\mathcal{Q}) \le 1$, then we have $H(\mathcal{P} \cup \mathcal{Q}) = \varphi^{-1}\left(\frac{W(\mathcal{P}) \varphi(H(\mathcal{P})) + W(\mathcal{Q}) \varphi(H(\mathcal{Q}))}{W(\mathcal{P}) + W(\mathcal{Q})}\right)$.

Theorem 2.8 (Rényi). The only function $H$ defined for all $\mathcal{P} \in \mathcal{D}$ and satisfying the Postulates (i), (ii), (iii), (iv') with $\varphi(x) = 2^{(\alpha - 1)x}$, $\alpha > 0$, $\alpha \ne 1$, and (v) is $H = H_\alpha(\mathcal{P})$, where $H_\alpha(\mathcal{P}) = \frac{1}{1 - \alpha} \log_2 \frac{\sum_{k} p_k^{\alpha}}{\sum_{k} p_k}$ is the entropy of order $\alpha$ of Rényi.

Since it is defined for incomplete distributions, the entropy of order $\alpha$ of Rényi would suit as a measure for incomplete random variables and would be in conformity with the Carbone-Gromov notion of dynamical time of variable fractal dimension in Section 8.
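A small numerical sketch (in Python, with an incomplete distribution of our own choosing) of the order $\alpha$ entropy as reconstructed above, illustrating that $H_\alpha \to H_1$ as $\alpha \to 1$:

```python
import math

def renyi_entropy(p, alpha):
    """Order-alpha entropy (base 2) of a generalized distribution with
    weight W = sum p_k <= 1; alpha = 1 gives the order-1 (mean) entropy."""
    w = sum(p)
    if alpha == 1:
        return sum(x * math.log2(1 / x) for x in p) / w
    return math.log2(sum(x ** alpha for x in p) / w) / (1 - alpha)

p = [0.3, 0.2, 0.1]  # weight 0.6 < 1: an incomplete distribution
for a in (0.5, 0.99, 1, 1.01, 2):
    print(a, renyi_entropy(p, a))  # values approach the order-1 entropy as a -> 1
```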

A complete characterization of the admissible $\varphi$ in Theorem 2.8, with general Kolmogorov-Nagumo functions, was given by Daróczy in 1963, to the effect that the only admissible $\varphi$ are linear functions and linear functions of the exponential function (see, e.g., [13, page 313]).

As is stated in [14, page 552] and [11], the most significant order $\alpha$ information measure of Rényi is the "gain of information," which would also work in comparing the microstates of the body. We hope to return to this in the near future.

2.2. Thermodynamic Intermission à la Boltzmann

Quantities of the form $-\sum_{k} p_k \log p_k$ (2.38), or any analogue thereof, played a central role in Boltzmann's statistical mechanics much earlier than in information theory. In Boltzmann's formulation of thermodynamics, $p_k$ is the probability of the system being in the $k$th cell of its phase space. See also the heuristic argument of [15, page 18] below.

We now give a brief description of elements of thermodynamics from Boltzmann’s standpoint (see e.g., [16]).

2.2.1. Entropy Increase Principle

All natural phenomena have the propensity to transform into the state of higher probability, that is, the state of higher entropy. This is often recognized as the entropy increase principle.

Let $v$ denote the velocity of the molecules (of the same kind) and let $f(v, t)$ denote the velocity distribution function. Then the total number of molecules is given by $N = \int f(v, t) \, \mathrm{d}v$. Boltzmann introduced the Boltzmann $H$-function $H(t) = \int f(v, t) \log f(v, t) \, \mathrm{d}v$.

The entropy in (2.38) may be regarded as $-1$ times the Boltzmann $H$-function. For we may view $\sum_k p_k \log p_k$ as a Stieltjes integral $\int \log f \, \mathrm{d}F$, which in turn may be thought of as $\int f \log f \, \mathrm{d}v$. See Theorem 2.10 below.

He proved the following.

Theorem 2.9 (the Boltzmann $H$-theorem, 1872). One has $\frac{\mathrm{d}H}{\mathrm{d}t} \le 0$, that is, $H$ decreases as time elapses.

We state a heuristic argument [15, page 18] toward the natural introduction of the $H$-function.

In statistical mechanics, macrostates (properties of a large number of particles, such as temperature $T$, volume $V$, pressure $P$) are contrasted with microstates (properties of each particle, such as position $q$, momentum $p$, velocity $v$). Given a macrostate $M$, there are $W = W(M)$ microstates corresponding to $M$. Then the entropy of $M$ is defined as $S = k \log W$, where $k$ is the Boltzmann constant.

Suppose that the $j$th microstate occurs with probability $p_j$. Consider the system $\Sigma$ consisting of a very large number $N$ of copies ($N$-dimensional Cartesian product) of the given system. Then on average there will be $\| N p_j \|$ copies of the $j$th microstate in $\Sigma$, where the norm symbol indicates the nearest integer to $N p_j$. Hence for the total number $W_N$ of microstates corresponding to this macrostate in $\Sigma$ it follows that $W_N = \frac{N!}{\prod_j \| N p_j \|!}$. Applying the Stirling formula [7, (2.1), page 24], $\log N! = N \log N - N + O(\log N)$, we find that $\log W_N = -N \sum_j p_j \log p_j + O(\log N)$, or, for the entropy of the system, $S_N \approx -k N \sum_j p_j \log p_j$ (2.43). Under the normality condition $\sum_j p_j = 1$, (2.43) simplifies, per copy, to $S = -k \sum_j p_j \log p_j$.
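The heuristic can be checked numerically; a Python sketch (probabilities ours) of the multinomial count $W_N = N! / \prod_j \| N p_j \|!$ via the log-gamma function:

```python
import math

# (log W_N)/N should approach -sum p_j log p_j (about 1.0397 nats here).
p = [0.5, 0.25, 0.25]
for N in (100, 1000, 10000):
    counts = [round(N * pj) for pj in p]  # nearest-integer occupation numbers
    logW = math.lgamma(N + 1) - sum(math.lgamma(c + 1) for c in counts)
    print(N, logW / N)
print(-sum(x * math.log(x) for x in p))  # the Boltzmann/Shannon limit
```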

Since $\sum_j p_j \log (1/p_j)$ may be regarded as the arithmetic mean of the $\log (1/p_j)$'s, it follows from (2.47) that the entropy is the mean value of the entropies $k \log (1/p_j)$ of the individual microstates.

The first law of thermodynamics, or the law of conservation of energy, is one of the most universal laws that govern our space. We consider an isolated thermodynamical system, where isolated means that the system does not give or receive heat from outside sources. In what follows, (i) $Q$ means the heat, (ii) $T$ means the absolute temperature, (iii) $S$ means the entropy, related by $\mathrm{d}S = \frac{\mathrm{d}Q}{T}$.

Boltzmann proved the following.

Theorem 2.10. One has the relation $S = -kH$, where $k$ is the Boltzmann constant.

Theorems 2.9 and 2.10 together imply that entropy increases, which is the second law of thermodynamics.

Proposition 2.11. The maximum of the entropy (2.38) over all probability distributions $(p_1, \ldots, p_n)$ (of an information system) is attained for $p_1 = \cdots = p_n = \frac{1}{n}$ (2.50), with maximum $\log n$.

Proof. Since we have the constraint $\sum_{k=1}^{n} p_k = 1$ (2.23), we apply the Lagrange multiplier method. Let $F = -\sum_{k=1}^{n} p_k \log p_k + \lambda \left( \sum_{k=1}^{n} p_k - 1 \right)$, where $\lambda$ is a parameter. We may find the extremal points of the entropy among the stationary points, which are the solutions to the equation $\nabla F = 0$, that is, the solutions of the system of equations $\frac{\partial F}{\partial p_k} = -\log p_k - 1 + \lambda = 0$, $k = 1, \ldots, n$ (2.52). From (2.52), we have $p_1 = \cdots = p_n = e^{\lambda - 1}$. Substituting these in (2.23), we conclude that the stationary point is $p_k = \frac{1}{n}$. Since the entropy, being continuous on a compact set, attains its maximum, we conclude that the maximum is attained for (2.50).

Equation (2.50) is in conformity with our intuition that the entropy becomes maximal when all the outcomes are equally likely. Consider, for example, the case "the die is cast."
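A quick numerical confirmation of Proposition 2.11 (a Python sketch; the random trials are ours): among distributions on $n$ points, the uniform one attains the maximal entropy $\log n$:

```python
import math
import random

def H(p):
    return -sum(x * math.log(x) for x in p if x > 0)

n = 6  # "the die is cast"
print(H([1 / n] * n), math.log(n))  # both equal log 6 = 1.7918...
random.seed(1)
for _ in range(5):
    w = [random.random() for _ in range(n)]
    p = [x / sum(w) for x in w]  # a random point of the probability simplex
    print(H(p) <= math.log(n))   # True in every trial
```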

3. Molecular Orbitals

This section is devoted to a clear-cut exposition of energy levels of molecular orbitals of hydrocarbons (carbon-hydrides) and is an expansion of [7, Section 1.4].

We will consider the difference between the energy levels of molecular orbitals (MOs) of a chain-shaped polyene (e.g., 1,3-butadiene) and a ring-shaped one (e.g., the cyclopentadienyl anion) in Section 4.1, in contrast to the chain-shaped 1,3,5-hexatriene and the ring-shaped benzene treated in Section 4.2.

In quantum mechanics, one assumes that the totality of all states of a system forms a normed $\mathbb{C}$-vector space and that all (quantum) mechanical quantities are expressed as Hermitian operators. For a Hermitian operator $A$, the eigenvectors belonging to its eigenvalue $a$ are viewed as the quantum states whose mechanical quantity is equal to $a$. The Hermitian operator $H$ expressing the energy of a system is called the Hamiltonian, and the quantum state $\psi$ varies with the time variable $t$ according to the Schrödinger equation $i \hbar \frac{\partial \psi}{\partial t} = H \psi$, where $\hbar = \frac{h}{2\pi}$ and $h$ is called the Planck constant. If $H \psi = E \psi$, $E$ being real and called an energy level of the system, the solution is given by $\psi(t) = e^{-iEt/\hbar} \psi(0)$ and is called a stationary state, on the ground that its expectation does not change with time. The energy levels mean the values of the energy which the stationary states can assume.

Example 3.1. We deduce the secular determinant for the molecular orbital $\psi = \sum_{j=1}^{n} c_j \phi_j$ consisting of $n$ atomic orbitals, where $\phi_1, \ldots, \phi_n$ are atomic orbitals and $c_1, \ldots, c_n$ are (complex) coefficients. Let $H$ denote the Hamiltonian of the molecule and let $E = \frac{\int \psi^* H \psi \, \mathrm{d}\tau}{\int \psi^* \psi \, \mathrm{d}\tau}$, where, in general, $\psi$ is to be treated as a complex vector, in which case $\psi^* \psi$ and $\psi^* H \psi$ are to be regarded as $|\psi|^2$ and $\overline{\psi} H \psi$, respectively, and the integrals are over $\mathbb{R}^3$. We write $S_{jk} = \int \phi_j^* \phi_k \, \mathrm{d}\tau$, $H_{jk} = \int \phi_j^* H \phi_k \, \mathrm{d}\tau$ and refer to $S_{jk}$ and $H_{jk}$ as the overlapping integral and the resonance integral between $\phi_j$ and $\phi_k$, respectively. Then, for each $j$, $\frac{\partial E}{\partial c_j} = 0$ at an extremum. Applying the differentiation rule for the quotient in the form $\left( \frac{u}{v} \right)' = \frac{u' - (u/v) v'}{v}$, we deduce that $\sum_{k} (H_{jk} - E S_{jk}) c_k = 0$, that is, the system of linear equations (3.9). For (3.9) to have a nontrivial solution $(c_1, \ldots, c_n)$, the coefficient matrix must be singular, so that $\det (H_{jk} - E S_{jk}) = 0$ (3.10). We apply the simple LCAO (linear combination of atomic orbitals) method with the overlapping integrals $S_{jk} = \delta_{jk}$, where $\delta_{jk}$ is the Kronecker delta, that is, $\delta_{jk} = 1$ for $j = k$ and $\delta_{jk} = 0$ otherwise, so that (3.10) reduces to $\det (H_{jk} - E \delta_{jk}) = 0$, which is the secular determinant for $E$.

Hereby we also incorporate the simple Hückel method: we let the Coulomb integral $H_{jj}$ of each carbon atom in the $\pi$ orbital be $\alpha$, the resonance integral $H_{jk}$ between neighboring C–C atoms in the $\pi$ orbital be $\beta$, and all the others be 0.

Theorem 3.2. With all the above simplifications incorporated, the secular determinant reads
$$\begin{vmatrix} \alpha - E & \beta & & & \varepsilon \\ \beta & \alpha - E & \beta & & \\ & \beta & \ddots & \ddots & \\ & & \ddots & \ddots & \beta \\ \varepsilon & & & \beta & \alpha - E \end{vmatrix} = 0,$$
where $\varepsilon = 0$ or $\beta$ according as the molecule is chain-shaped or ring-shaped.
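For readers who wish to experiment, the following numpy sketch (the function name is ours) builds the Hückel matrix of Theorem 3.2 and returns its eigenvalues; in units of $\beta$ (with $\alpha = 0$) the chain gives $2\cos\frac{k\pi}{n+1}$ and the ring $2\cos\frac{2k\pi}{n}$, anticipating Section 4:

```python
import numpy as np

def huckel_levels(n, ring, alpha=0.0, beta=1.0):
    """Eigenvalues of the simple-Hueckel matrix of Theorem 3.2: alpha on the
    diagonal, beta between neighbors, and corner entries beta for a
    ring-shaped molecule, 0 for a chain-shaped one."""
    H = alpha * np.eye(n)
    for j in range(n - 1):
        H[j, j + 1] = H[j + 1, j] = beta
    if ring:
        H[0, n - 1] = H[n - 1, 0] = beta
    return np.sort(np.linalg.eigvalsh(H))

print(huckel_levels(4, ring=False))  # 1,3-butadiene: +-1.618, +-0.618
print(huckel_levels(6, ring=True))   # benzene: -2, -1, -1, 1, 1, 2
```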

4. Concrete Examples of Energy Levels of MOs

In Section 4.1, we dwell on 1,3-butadiene and the cyclopentadienyl anion as in [17, Section 3], while in Section 4.2 we mention 1,3,5-hexatriene and the ring-shaped benzene treated in [7, Chapter 1].

4.1. Golden Ratio in Molecular Orbitals

This section is an extract from [17, Section 3], referring to the golden ratio in the context of molecular orbitals. We will use the notation therein. Let $\phi = \frac{1 + \sqrt{5}}{2}$ be the golden ratio. In [17, Section 3], we considered the relation between the Fibonacci sequence $F_n$ and the golden ratio, known as Binet's formula: $F_n = \frac{1}{\sqrt{5}} \left( \phi^n - (-\phi^{-1})^n \right)$.

There is an enormous amount of literature on the golden ratio and the Fibonacci sequence, most of which is speculative. We mention a somewhat more plausible and persuasive statement in [18], referred to as an aesthetic theorem in [17], where it is divided into two descriptive statements.

Theorem 4.1 (The hierarchical over-structure theorem). Living organisms, and a fortiori, their descriptions in various media such as paintings, sculptures, and so forth are to be inscribed into pentagons, which are the governing frame of living organisms and which control their structure as a hierarchical overstructure and, as a result, the golden ratio appears as the intrinsic lower structure wherever there are pentagons.

By Theorem 3.2, the secular determinant of 1,3-butadiene is
$$\begin{vmatrix} \alpha - E & \beta & 0 & 0 \\ \beta & \alpha - E & \beta & 0 \\ 0 & \beta & \alpha - E & \beta \\ 0 & 0 & \beta & \alpha - E \end{vmatrix} = 0. \quad (4.12)$$
On the other hand, the secular determinant of the cyclopentadienyl anion is
$$\begin{vmatrix} \alpha - E & \beta & 0 & 0 & \beta \\ \beta & \alpha - E & \beta & 0 & 0 \\ 0 & \beta & \alpha - E & \beta & 0 \\ 0 & 0 & \beta & \alpha - E & \beta \\ \beta & 0 & 0 & \beta & \alpha - E \end{vmatrix} = 0. \quad (4.13)$$

Putting $x = \frac{\alpha - E}{\beta}$ (4.4), (4.12) and (4.13) become $D_4(x) = 0$ and $\widetilde{D}_5(x) = 0$, respectively, in the notation of Theorems 4.2 and 4.7 below.

By Theorem 4.2 it follows that $D_4(x) = U_4\left(\frac{x}{2}\right)$ and that the zeros of $D_4$ occur if and only if $x = 2 \cos \frac{k\pi}{5}$, $k = 1, 2, 3, 4$. Hence, substituting these in $E = \alpha - \beta x$, we see that the energy levels of 1,3-butadiene with 4 $\pi$ electrons are $E = \alpha + 2\beta \cos \frac{k\pi}{5}$, $k = 1, 2, 3, 4$.

As we will see in Section 5, $2 \cos \frac{\pi}{5} = \phi$ and $2 \cos \frac{2\pi}{5} = \phi - 1$. Indeed, these $x$-values are the roots of the polynomial $U_4\left(\frac{x}{2}\right) = x^4 - 3x^2 + 1 = (x^2 - x - 1)(x^2 + x - 1)$ (see (5.14)), and we obtain $x = \pm \phi, \pm (\phi - 1)$, where we note that $\phi^{-1} = \phi - 1$.

Hence the numerical values of the energy levels are $E = \alpha \pm 1.618\beta$ and $E = \alpha \pm 0.618\beta$, on substituting $\sqrt{5} \approx 2.2361$.

On the other hand, to find the energy levels of the molecular orbitals of the cyclopentadienyl anion, direct computation is possible, but we prefer to apply the theory of circulant matrices as in Section 4.2. By Theorem 4.7, the eigenvalues are $x = -2 \cos \frac{2k\pi}{5}$, $k = 0, 1, 2, 3, 4$. For $E = \alpha - \beta x$, the energy levels of the 6 $\pi$ electrons of the cyclopentadienyl anion are $E = \alpha + 2\beta \cos \frac{2k\pi}{5}$, that is, $\alpha + 2\beta$, $\alpha + (\phi - 1)\beta$ (twice), and $\alpha - \phi\beta$ (twice). Thus the golden ratio appears in this context. It would be just natural that it appears for the pentagonal molecule, but it is remarkable that the golden ratio appears in the case of 4 carbon atoms, for a chain-shaped hydrocarbon. For the ring-shaped 1,3-cyclobutadiene see the end of Section 4.2.
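The appearance of the golden ratio can be verified in a few lines of numpy (a sketch under the conventions above):

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
# Chain, n = 4: the zeros of x^4 - 3x^2 + 1 = (x^2 - x - 1)(x^2 + x - 1)
# are x = 2 cos(k pi/5) = +-phi, +-(phi - 1).
print(np.sort(2 * np.cos(np.pi * np.arange(1, 5) / 5)))
print(np.sort(np.roots([1, 0, -3, 0, 1])))  # the same four numbers
# Ring, n = 5 (cyclopentadienyl anion): 2 cos(2 k pi/5) takes the
# values 2, phi - 1 (twice), and -phi (twice).
print(np.sort(2 * np.cos(2 * np.pi * np.arange(5) / 5)))
```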

4.2. Linear and Hexagonal MOs

By Theorem 3.2, the secular determinant of 1,3,5-hexatriene is the $6 \times 6$ analogue of (4.12), with $\varepsilon = 0$. On the other hand, the secular determinant of benzene is the $6 \times 6$ analogue of (4.13), with corner entries $\varepsilon = \beta$.

As in Section 4.1, by the change of variables (4.4), these become $D_6(x) = 0$ and $\widetilde{D}_6(x) = 0$, respectively. By the standard technique, we may expand the determinants to find the eigenvalues. Then we may find the eigenspaces (molecular orbitals) by solving the system of homogeneous linear equations. Instead of such an ad hoc method, we have a universal method using circulant matrices.

Regarding $D_6$, we apply the recurrence (4.20) and substitute successively to arrive at the sextic $x^6 - 5x^4 + 6x^2 - 1 = 0$, which is extremely difficult to decompose.

Thus we appeal to the following theorem making use of Chebyshëv polynomials.

Theorem 4.2. Let $D_n = D_n(x)$ be the determinant of degree $n$ with the first row $(x, 1, 0, \ldots, 0)$, the second row $(1, x, 1, 0, \ldots, 0)$, and so forth. Then one has $D_n(x) = U_n\left(\frac{x}{2}\right)$ (4.17), where $U_n$ is the Chebyshëv polynomial of the second kind of degree $n$ (cf. Section 5), and $D_n(x) = 0$ occurs if and only if $x = 2 \cos \frac{k\pi}{n+1}$, $k = 1, \ldots, n$. Substituting in $E = \alpha - \beta x$, one sees that the energy levels of a chain-shaped polyene are $E = \alpha + 2\beta \cos \frac{k\pi}{n+1}$, $k = 1, \ldots, n$.

Proof. By the standard technique of expanding along the first row, we may deduce the recurrence $D_n(x) = x D_{n-1}(x) - D_{n-2}(x)$ (4.20). Since, by (5.3), $U_n\left(\frac{x}{2}\right)$ also satisfies (4.20), we conclude (4.17). The zeros are found from Remark 5.2, completing the proof.

Example 4.3. For $n = 2$, we have $D_2(x) = x^2 - 1$, whence $x = \pm 1$. This of course immediately comes from $U_2\left(\frac{x}{2}\right) = x^2 - 1$.
In the case of $n = 4$, we have 1,3-butadiene with 4 $\pi$ electrons, treated in Section 4.1.

Example 4.4. In the case of $n = 6$, we have the zeros $x = 2 \cos \frac{k\pi}{7}$, $k = 1, \ldots, 6$. In this case we need to appeal to a computer or to a table of trigonometric functions to find that $\cos \frac{\pi}{7} \approx 0.9009$, so that $2 \cos \frac{\pi}{7} \approx 1.8019$. To find the intermediate values, we apply proportional allotment (linear interpolation) in the table.
Hence it follows that the energy levels of 1,3,5-hexatriene are $E = \alpha + 2\beta \cos \frac{k\pi}{7}$, $k = 1, \ldots, 6$. Other cases are similar.

On the other hand, to find molecular orbitals of the benzene, we may apply the theory of circulant matrices.

Definition 4.5. For $c_0, c_1, \ldots, c_{n-1} \in \mathbb{C}$ one calls
$$C = \begin{pmatrix} c_0 & c_1 & \cdots & c_{n-1} \\ c_{n-1} & c_0 & \cdots & c_{n-2} \\ \vdots & \vdots & \ddots & \vdots \\ c_1 & c_2 & \cdots & c_0 \end{pmatrix}$$
a circulant matrix (or a circulant). Also, putting $(c_0, c_1, \ldots, c_{n-1}) = (0, 1, 0, \ldots, 0)$, one calls the resulting matrix $\pi$ the shift forward matrix (which plays a fundamental role in the theory of circulant matrices); its rows are the fundamental unit vectors $e_2, e_3, \ldots, e_n, e_1$, where $e_j = (\delta_{jk})_k$, with $\delta_{jk}$ denoting the Kronecker symbol ($\pi$ is for push). Using this, we conclude that $C = c_0 I + c_1 \pi + \cdots + c_{n-1} \pi^{n-1}$. Viewing this as a polynomial, one calls $p(z) = c_0 + c_1 z + \cdots + c_{n-1} z^{n-1}$ (4.26) a representor of $C$.

Note that circulant matrices are matrix representations of the group ring, over $\mathbb{R}$ or $\mathbb{C}$ as the case may be, of the underlying cyclic group [19, 20]. For example, the circulant is the matrix representation of the group ring of the cyclic group $\langle \rho \rangle$, where $\rho$ is the rotation by $\frac{2\pi}{n}$.

Letting $\zeta = \zeta_n = e^{2\pi i / n}$ be the first primitive $n$th root of 1, we define the Fourier matrix $F$ by means of its conjugate transpose $F^*$: $F^* = \frac{1}{\sqrt{n}} \left( \zeta^{jk} \right)_{0 \le j, k \le n-1}$.

Theorem 4.6 (Davis [21]). Any circulant matrix $C$ can be diagonalized as $C = F \Lambda F^*$ by the Fourier matrix $F$, where $\Lambda = \operatorname{diag}\left( p(1), p(\zeta), \ldots, p(\zeta^{n-1}) \right)$. Thus, in particular, the eigenvalues of $C$ are $p(\zeta^k)$, $k = 0, 1, \ldots, n-1$, where $p$ is the representor defined by (4.26).
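Theorem 4.6 is easy to check numerically; in the following Python sketch (the helper name is ours) we compare the eigenvalues of a circulant with the values of its representor at the $n$th roots of unity:

```python
import numpy as np

def circulant(c):
    """Circulant whose k-th row is the first row c shifted k places."""
    return np.array([np.roll(c, k) for k in range(len(c))])

c = np.array([2.0, 1.0, 0.0, 0.0, 1.0])  # representor p(z) = 2 + z + z^4, n = 5
n = len(c)
zeta = np.exp(2j * np.pi * np.arange(n) / n)  # the n-th roots of unity
p_values = sum(c[j] * zeta ** j for j in range(n))
print(np.sort(p_values.real))                         # p(zeta^k), here all real
print(np.sort(np.linalg.eigvals(circulant(c)).real))  # the same multiset
```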

Theorem 4.7. Letting $\widetilde{D}_n = \widetilde{D}_n(x)$ be the determinant of degree $n$ whose first row, second row, and so forth are $(x, 1, 0, \ldots, 0, 1)$, $(1, x, 1, 0, \ldots, 0)$, and so on, cyclically, then $\widetilde{D}_n$ is the determinant of the circulant with representor $p(z) = x + z + z^{n-1}$, so that $p(\zeta^k) = x + 2 \cos \frac{2k\pi}{n}$, $k = 0, 1, \ldots, n-1$, are the eigenvalues. Since $\widetilde{D}_n(x) = \prod_{k=0}^{n-1} p(\zeta^k)$, it follows that $\widetilde{D}_n(x) = 0$ if and only if $x = -2 \cos \frac{2k\pi}{n}$ for some $k$. Hence, for $n = 6$, the zeros are $x = -2, -1, -1, 1, 1, 2$, so that the energy levels of benzene are $E = \alpha \pm 2\beta$ and $E = \alpha \pm \beta$ (each twice).

For $n = 4$, the energy levels of the 4 $\pi$ electrons of 1,3-cyclobutadiene are $E = \alpha + 2\beta, \ \alpha, \ \alpha, \ \alpha - 2\beta$.

Remark 4.8. In deducing Theorem 4.7, the full force of Theorem 4.6 is not used. It may also be used in another setting to give a few-line proof of the celebrated Blahut theorem in coding theory, to the effect that the Hamming weight of a code is the rank of its Fourier matrix (cf. [22]).

5. Chebyshëv Polynomials

In this section we assemble some basics on the Chebyshëv polynomials to an extent sufficient for understanding the computations in Section 4. Chebyshëv polynomials may most easily be introduced by the de Moivre formula $(\cos \theta + i \sin \theta)^n = \cos n\theta + i \sin n\theta$.

Definition 5.1. If $x = \cos \theta$, then $\cos n\theta$ is a polynomial in $x$ of degree $n$; it is known as the Chebyshëv polynomial of the first kind and denoted by $T_n(x)$. Similarly, $\frac{\sin (n+1)\theta}{\sin \theta}$ is a polynomial in $x$ of degree $n$, known as the Chebyshëv polynomial of the second kind: $U_n(x) = \frac{\sin (n+1)\theta}{\sin \theta}$, $x = \cos \theta$.

The notation $T_n$ is after Tchebyshef (or Tschebyscheff), who first introduced these polynomials, a proper transcription being Čebyšëv. $T_n$ and $U_n$ satisfy the recurrences $T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)$ and $U_{n+1}(x) = 2x U_n(x) - U_{n-1}(x)$ (5.3), by which they may also be so defined, respectively, with the initial values $T_0(x) = 1$, $T_1(x) = x$ and $U_0(x) = 1$, $U_1(x) = 2x$ (5.4).

We point out that most of the identities for the Chebyshëv polynomials are rephrasings of well-known trigonometric identities. For example, the second recurrence in (5.3) is a consequence of the trigonometric identity $\sin (n+2)\theta + \sin n\theta = 2 \cos \theta \sin (n+1)\theta$.

As an important case, we rephrase the identity (which follows from the addition theorem) $\sin (n+1)\theta = \sin n\theta \cos \theta + \cos n\theta \sin \theta$. Dividing this by $\sin \theta$, we obtain $U_n(x) = x U_{n-1}(x) + T_n(x)$ (5.7).

Thus, all the results on $U_n$ may be transferred to $T_n$ through (5.7), which fact will show its effect in elucidating the coefficients in (5.8).

Since it turns out that it is usually easier to work with $U_n$, we will mainly treat the second kind. The reason, which is not made clear in the preceding literature, is that the sine function (corresponding to $U_n$) is taken as the basis, being the fundamental wave which vanishes at the origin, and the cosine (corresponding to $T_n$) is its counterpart (cf. (5.12) below).

We note that although (5.8) are initially obtained for $-1 \le x \le 1$, they are valid for all values of $x$ by analytic continuation. If in the substitution $x = \cos \theta$ we regard $\theta$ as a complex variable, there is no range restriction, but then we need to take into account the multivaluedness of the inverse cosine. It is instructive to consider the situation as a limiting case of the mapping $x = \frac{1}{2}\left(z + \frac{1}{z}\right)$, $z = e^{i\theta}$.
(i) We have the following concrete expressions:
$$U_1(x) = 2x, \quad U_2(x) = 4x^2 - 1, \quad U_3(x) = 8x^3 - 4x, \quad U_4(x) = 16x^4 - 12x^2 + 1. \quad (5.9)$$
(ii) If $n$ is odd, then $U_n$ is a polynomial in odd powers of $x$, and if $n$ is even, then $U_n$ is a polynomial in even powers of $x$. The same holds in the case of $T_n$.
(iii) We find the values of $\cos \frac{\pi}{5}$ and $\cos \frac{2\pi}{5}$. We apply the pentatonic formula (5.9) for $U_4$: $U_4(\cos \theta) = \frac{\sin 5\theta}{\sin \theta}$, whence $U_4\left(\cos \frac{k\pi}{5}\right) = 0$ for $k = 1, 2, 3, 4$. Solving the equation $16x^4 - 12x^2 + 1 = 0$, we immediately find $x^2 = \frac{3 \pm \sqrt{5}}{8}$, whence $\cos \frac{\pi}{5} = \frac{1 + \sqrt{5}}{4} = \frac{\phi}{2}$ and $\cos \frac{2\pi}{5} = \frac{\sqrt{5} - 1}{4} = \frac{\phi - 1}{2}$.

We have a companion formula to (5.10). There is also a point that distinguishes $U_n$ from $T_n$, expressed by (5.15). Equation (5.15) suggests the following remark, which is essential in Section 4.2.

Remark 5.2. $U_n\left(\frac{x}{2}\right) = 0$ occurs if and only if $\sin (n+1)\theta = 0$ while $\sin \theta \ne 0$, where $x = 2 \cos \theta$, that is, if and only if $x = 2 \cos \frac{k\pi}{n+1}$, $k = 1, \ldots, n$.
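The recurrence (5.3), the defining identity, and Remark 5.2 admit a quick check in Python (poly1d arithmetic; the function name is ours):

```python
import numpy as np

def chebyshev_U(n):
    """U_n via U_0 = 1, U_1 = 2x, U_{n+1} = 2x U_n - U_{n-1} (recurrence (5.3))."""
    U = [np.poly1d([1.0]), np.poly1d([2.0, 0.0])]
    for _ in range(2, n + 1):
        U.append(np.poly1d([2.0, 0.0]) * U[-1] - U[-2])
    return U[n]

U4 = chebyshev_U(4)
print(U4.coeffs)  # [16, 0, -12, 0, 1], i.e. 16x^4 - 12x^2 + 1 as in (5.9)
theta = 0.3
print(U4(np.cos(theta)), np.sin(5 * theta) / np.sin(theta))  # defining identity
# Remark 5.2: the zeros of U_4(x/2) are x = 2 cos(k pi/5), k = 1, ..., 4.
print(np.sort(2 * U4.roots))
print(np.sort(2 * np.cos(np.pi * np.arange(1, 5) / 5)))
```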

Since the coefficients in (5.8) are rather involved, it is natural to seek a more concise form for them. The easiest method is to use the differential equation satisfied by $T_n$ and $U_n$, which is widely known. But since the Chebyshëv polynomials are special cases of the Gegenbauer polynomials, which in turn are special cases of hypergeometric functions, we are to work with the last to apply the method of undetermined coefficients.

In [17] we appealed to generatingfunctionology as stated in Comtet [23, page 87], proving that if we assume the second recurrence formula in (5.3) with the second initial condition in (5.4), then we may deduce a universal expression for $U_n$.

6. Free Groups versus Formal Language Theory

As opposed to the familiar Cartesian product, the free product is the most general construction from a given family of sets (or groups). It is indeed a concept dual to the direct product in the case of groups.

Let $A$ be a given nonempty set, called an alphabet. We call any finite sequence of elements of $A$ a word (or a string), written $w = a_1 a_2 \cdots a_n$, where we also admit the void sequence, called the void word and written $\varepsilon$. Let $A^*$ denote the set of all words on $A$. On $A^*$ there is a concatenation operation: given two words $u$ and $v$, we concatenate them to get the new word $uv$. Since the associative law holds true, $A^*$ forms a monoid with identity $\varepsilon$.

In the case of codons, we have $A = \{\mathrm{A}, \mathrm{T}, \mathrm{G}, \mathrm{C}\}$, and $A^*$ is the set of all (single-stranded) DNAs. We refer, for example, to [5], where the difference between circular and linear DNAs is remarked and where it is also noted that present language theory deals with linear strings. Therefore, the codons are treated in pairs.

Now we go on to the notion of free groups. Given a family of groups $G_\lambda$, $\lambda \in \Lambda$, let $A$ be the disjoint union of the $G_\lambda$'s and let $A^*$ be the set of all words on $A$; $A^*$ is a monoid as above. To introduce the group structure, we define the relation $w \to w'$ if either (i) the word $w$ has successive members in the same group and $w'$ is obtained from $w$ by replacing them by their product, or (ii) some members of $w$ are identities and $w'$ is obtained by deleting them. For two words $w, w'$ we write $w \sim w'$ if there is a finite sequence $w = w_0, w_1, \ldots, w_r = w'$ such that for each $j$, either $w_j \to w_{j+1}$ or $w_{j+1} \to w_j$ holds. Then we may prove that this relation is an equivalence relation, and so we may construct the quotient set $A^*/\sim$, on which we may define a multiplication making it a group, the free product of the $G_\lambda$'s.

Thus, as stated in [24, page 13], in order to multiply the word by another word , we write them down in juxtaposition and carry out the necessary cancellations (multiplications in a group) and contractions (deleting identities).

On [4, page 20, page 56, etc.], one finds some interesting arguments on single-stranded DNAs as words in the free group $F$ generated by the two letters $\mathrm{A}$ and $\mathrm{G}$, with $\mathrm{T} = \mathrm{A}^{-1}$ and $\mathrm{C} = \mathrm{G}^{-1}$. The abelianized group $F/[F, F]$, where the modulus $[F, F]$ is the commutator subgroup, is a free abelian group, and passing to it would result in excessive cancellation (hybridization). In addition to these 4 natural alphabets, there are synthesized ones. It would be an interesting problem to find the reason why creatures use only 4 alphabets. We may need formal language theory developed so that it can treat both circular and linear strings to consider such a problem, and we hope to return to this on another occasion.
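A toy Python sketch of this viewpoint (our rendering, not the formalism of [4]): reading T and C as the formal inverses of A and G, free-group reduction cancels adjacent complementary letters, a crude model of hybridization:

```python
INVERSE = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reduce_word(word):
    """Free-group reduction over {A, G}: delete adjacent pairs x x^{-1}."""
    stack = []
    for letter in word:
        if stack and stack[-1] == INVERSE[letter]:
            stack.pop()  # cancellation (hybridization of complementary letters)
        else:
            stack.append(letter)
    return "".join(stack)

print(reduce_word("GATTACA"))     # "GCA": the two A-T pairs cancel
print(reduce_word("AGCT") == "")  # True: the word reduces to the void word
```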

7. Definition of Life

A penetrating definition is essential to describing the whole realm of a discipline. We may recall the first passage from Pauling [25].

The universe is composed of substances (forms of matter) and radiant energy.

As in [6, page 71], from the beginning of time at the Big Bang singularity to the present, there has been only a finite amount of entropy generated, most of which is in the form of the cosmic background radiation. Thus, in the sense of classical physics, this is a comprehensive definition.

It may be true, however, that the passage is to be modified according to modern 20th-century physics, in which matter and radiant energy read, verbatim, fermions and bosons, and that information is to be added, so as to rephrase it:

The universe is composed of energy and information.

Still the first passage helps to have a grasp of the whole picture.

The ultimate objective of all sciences would be attaining "immortality," or at the very least "longevity in good health." To achieve this, it is necessary to know what life (as a process) is. In this section we will try to formulate a proper, enlightening definition of life by incorporating several proposed before.

We first state the rather virtual and speculative definition in [6, pages 124–128], though we intend to pursue longevity in vivo.

A “living being” is any entity which codes information (in the physics sense of this word) with the information coded being preserved by natural selection. Thus life is a form of information processing, and the human mind—and the human soul—is a very complex computer program. Specifically, a “person” is defined to be a computer program which can pass the Turing test.

This is rather against the classical definition of life as a complex process based on the chemistry of carbon atoms. In [26] it is suggested that the first living beings, our ultimate ancestors, were self-replicating patterns of defects in metallic crystals, not carbon. Over time, the pattern persisted and transferred to carbon molecules. Thus, one key feature of life is a dynamic pattern that persists over time, the persistence being due to a feedback with the environment: the information coded in the pattern continually varies, but the variation is constrained to a narrow range by this feedback. Thus:

Life is information preserved by natural selection.

As to the classical definition in terms of carbon atoms, it would be quite natural to go on to the booklet of Carbone and Gromov [4], as carbon is one of the main constituents of living organisms and the first author's name is Carbone, meaning carbon. We are particularly interested in [4, pages 12–14]. On [4, page 12] "Crick's dogma" is stated, to which we will return later. As part of a definition of life, [4, ll. 1–3, page 13] may be taken into account, which reads:

"The dynamics of the cell is a continuous flow of small molecules channeled by the interaction with macromolecules: DNA, RNA and proteins. The behavior of small molecules obeys the statistical rules of chemical kinetics."

As mentioned in the Abstract, we adopt the notion of entropy to view it, incorporating Schoenheimer's idea of the "dynamic state of body constituents" [27], where a simile is given of a military regime and an adult body.

On [28, page 107] the author elaborates on Schoenheimer’s definition of life and states

Life is a flow in dynamic equilibrium.

This definition resembles the Carbone-Gromov description of cell dynamics in that both refer to "flow." It gives, however, the impression that equilibrium is already attained, whereas it should mean local equilibrium. We need to incorporate the ultimate equilibrium, death, which could be compared to heat death [6, pages 66–73].

However, we have a much better and more penetrating metaphor in beautiful prose by a Japanese hermit-essayist of the 13th century. It reads:

The river never ceases to flow, its elements never remaining the same.

The foams that it forms appear and disappear constantly and are never stable.

As such are the life and its vessel.

The river is a human adult body, with the water supply corresponding to the food supply. The foams correspond to the various chemical reactions that take place in the body: regeneration and degradation. Only the oxidation part is missing, being replaced by the intensity of flow generated by the mass of water. Although this prose originally was to express the frailty of life, it literally describes the life process as seen by Schoenheimer.

Thus comes our definition of life:

Life is a constant irreversible flow, along the axis of internal time, of resistance against the entropy increase leading to the ultimate heat death, in terms of homeostasis, which keeps the local equilibria that balance the regeneration and degradation of molecules using the energy produced by oxidizing the intake material, where the synthesis is conducted according to the complementarity principle. Or, more physically speaking,

Life is a dynamic system to which negentropy is supplied by degrading and regenerating its components and excreting the waste before the components can be damaged by disturbances from outside, which would make the inner entropy increase.

We will explain why we have come to this definition which incorporates many ingredients scattered around in the literature.

The internal time clock idea came from [29], and it explains the difference between biological and chronological ages.

In [30], although the notion of entropy is introduced to interpret aging, the mechanism is not elucidated as to how life in vivo can continue much longer than experiments in vitro; the missing ingredient is the notion of the dynamic state of body constituents, first invented by Schoenheimer, as alluded to above.

Life is an irreversible flow of dynamically integrated aggregates of local equilibria maintained by homeostasis.

Aging is a malfunction of homeostasis caused by the elapse of internal time.

We do hope, by elucidating life activities, to slow the process of aging; that is, our wishful definition of life is the following.

Life is a one-way flow of dynamically integrated aggregates of local equilibria maintained by homeostasis, the flow being slowed down by due care of bodily and mental health.

To formulate "replicative stability of dynamical systems," a slightly modified Carbone-Gromov suggestion [4, page 44] would be suitable. Different internal time-clocks might use dynamical time of variable fractal dimension, taking into account the population size of the species. See Section 8.

There is the criticism of evolution theory that it is a tautology, saying that those which are likely to survive, or those which survived, are judged to be the fittest. However, it seems that those events which are likely to occur, that is, those with higher occurrence probability, occur more frequently than those which are less likely to occur (those with lower probability). When there are several events which are equally likely to occur, it is most natural that all events occur in the long run. The more the events, the more the choices, or uncertainty; whence, if there is a means of measuring the tendency of occurrence of events, it is to be an increasing function of the number of events. Shannon [8] proved a uniqueness theorem for such a measure, to the effect that those measures which satisfy some further conditions must be of the form of an entropy (times a constant, cf. Theorem 2.1).

On [31, page 199] some more important notions are mentioned, namely assimilation and dissimilation.

8. Entropy Increase Principle in Life Activities

We adopt the standpoint of [30, pages 105–116, 213–215] to interpret aging as the increase of entropy in the body. As is stated in earlier sections, in all autonomous systems the reaction proceeds in the direction of entropy increase. In living organisms, human bodies in particular, there may be an internal time which is governed by the amount of entropy, as opposed to outer time. With lower entropy the body can remain young irrespective of the outer time that elapses. This may explain the big difference between biological and chronological age; the difference may be up to one generation, 25 years, among individuals.

We take in food, material of smaller entropy, into our bodies to burn (oxidize) it and produce energy. Here a remark is due on the entropy description. Food is material of smaller entropy for the sources it comes from, but for our body it may be a big noise, and therefore our oxidation system oxidizes it to produce material of bigger entropy, which is to be excreted from the body. For example, glucose (of lower entropy) is absorbed through cell membranes and gets oxidized to become carbon dioxide, which is excreted as a substance of bigger entropy.

In [30] the understanding is that when entropy attains its maximum, the reaction stops and the system comes to equilibrium; in a living organism, this means the death of that individual. Thus there must be a function which keeps the inner entropy lower, called "homeostasis," which controls the amount of entropy. When one ages, this function stops working well, and then entropy starts increasing, bringing the living reaction to its end. With the insight of Schoenheimer, the process may be refined as follows.

A biological system represents one big cycle of closely linked chemical reactions.

After death, when the oxidative systems disappear, the synthetic systems also cease, and the unbalanced degenerative reactions lead to the collapse of the “thermodynamically unstable structure elements.”

Thus we may duly call the ultimate death “heat death” and understand the life process as a flow of many chemical reactions in local equilibrium.

Thus aging means the malfunction of homeostasis.

There may be many causes that give rise to the malfunction of homeostasis. One typical example is the attack of free radicals, to which we hope to return on another occasion (cf. [32]).