Abstract

Complex biochemical pathways can be reduced to chains of elementary reactions, which can be described in terms of chemical kinetics. Among the elementary reactions so far extensively investigated, we recall the Michaelis-Menten and the Hill positive-cooperative kinetics, which apply to molecular binding and are characterized by the absence and the presence, respectively, of cooperative interactions between binding sites. However, there is evidence of reactions displaying a more complex pattern: these follow the positive-cooperative scenario at small substrate concentration, yet negative-cooperative effects emerge as the substrate concentration is increased. Here, we analyze the formal analogy between the mathematical backbone of (classical) reaction kinetics in Chemistry and that of (classical) mechanics in Physics. We first show that standard cooperative kinetics can be framed in terms of classical mechanics, where the emerging phenomenology can be obtained by applying the principle of least action of classical mechanics. Further, since the saturation function plays in Chemistry the same role played by velocity in Physics, we show that a relativistic scaffold naturally accounts for the kinetics of the above-mentioned complex reactions. The proposed formalism yields a unified, consistent picture of cooperative-like reactions and a stronger mathematical control.

1. Introduction

1.1. The Chemical Kinetics Background

The mathematical models that describe reaction kinetics provide chemists and chemical engineers with tools to better understand, depict, and possibly control a broad range of chemical processes (see, e.g., [1, 2]). These include applications to pharmacology, environmental pollution monitoring, and the food industry. In particular, biological systems are often characterized by complex chemical pathways whose modeling is rather challenging and cannot be recast in standard schemes [3–15] (see also [16–19] for a different perspective). In general, one tries to split such sophisticated systems into a set of elementary constituents, in mutual interaction, for which a clear formalization is available [20–25].

In this context, one of the most consolidated elementary schemes is given by the Michaelis-Menten law. This was originally introduced by Leonor Michaelis and Maud Menten to describe enzyme kinetics and can be applied to systems made of two reactants, say P (the binding molecule or, more generally, the binding sites of a molecule) and S (the free ligand, i.e., the substrate), which can bind (and unbind) to form the product PS. If we call α the concentration of free ligand, Y the saturation function (or fractional occupancy), namely, the fraction of bound molecules (0 ≤ Y ≤ 1), and, accordingly, 1 − Y the fraction of the unbound molecules, under proper assumptions one can write

Y/(1 − Y) = α/K,  (1)

where K is the proportionality constant between response and occupancy (otherwise stated, it is the ratio between the dissociation and the association constants). In particular, as standard, it is assumed that the reaction is in a steady state, with the product being formed and consumed at the same rate, that the free ligand concentration is in large excess over that of the binding molecules in such a way that it can be considered as constant along the reaction, and that all the binding molecules are equivalent and independent. Also, the derivation of the Michaelis-Menten law is based on the law of mass action.

By reshuffling the previous equation we get Y = α/(K + α), which allows stating that K is the concentration of free ligand at which half of the binding sites are occupied (i.e., when α = K, then Y = 1/2). Thus, denoting with α0 ≡ K the half-saturation ligand concentration, we get

Y(α) = α/(α0 + α).  (2)

This equation represents a rectangular hyperbola with horizontal asymptote Y = 1 corresponding to full saturation; this is the typical outcome expected for systems where no interaction between binding sites is at work [26]. This model immediately became the paradigm for Chemical Kinetics, somewhat similarly to the perfect gas model (where atoms, or molecules, collisions apart, do not interact) of the Kinetic Theory in early Statistical Physics [27]. Nevertheless, deviations from this behaviour were not long in coming: the most common phenomenon was the occurrence of a positive cooperation among the binding sites of a multisite molecule. Actually, many polymers and proteins exhibit cooperativity, meaning that the ligand binds in a nonindependent way: if, upon a ligand binding, the probability of further binding (by other ligands) is enhanced, the system is said to show positive cooperativity.
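As a quick numerical companion, the Michaelis-Menten law (2) can be sketched in a few lines of Python (the function name and the choice of units are ours, for illustration only):

```python
def michaelis_menten(alpha, alpha0):
    """Saturation function Y(alpha) = alpha/(alpha0 + alpha) of eq. (2).

    alpha: free-ligand concentration; alpha0: half-saturation concentration.
    """
    return alpha / (alpha0 + alpha)
```

By construction Y(α0) = 1/2 and Y → 1 as α grows, reproducing the rectangular hyperbola described above.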

To fix ideas, let us make a practical example and consider the case of a well-known protein, namely hemoglobin. This is responsible for oxygen transport throughout the body and it ultimately allows cellular respiration. Such features stem from hemoglobin’s ability to bind (and to dislodge as needed) up to four molecules of oxygen in a nonindependent way: if one of the four sites has captured an oxygen molecule, then the probability that the remaining three sites will capture further oxygen increases, and vice versa. As a result, if the protein is in an environment rich in oxygen (e.g., in the lungs), it readily binds up to four molecules of oxygen, and, just as readily, it gets rid of them when crossing an oxygen-deficient environment. To study its behaviour quantitatively one typically measures its characteristic input-output relation. This can be achieved by considering a set of elementary experiments where these proteins, in the same amount for each experiment, are prepared in a beaker and allowed to bind oxygen, which is supplied at different concentrations for different experiments. We can then construct a Cartesian plane, where on the abscissa axis we set the concentration α of the ligand (oxygen in this case, i.e., the input) while on the ordinate axis we put the fraction Y of protein bound sites (the saturation function, i.e., the output). In this way, for each experiment, once chemical equilibrium is reached, we get a saturation level and we can draw a point in the considered Cartesian plane; interpolating between all the points, a sigmoidal curve emerges (see Figure 1). Archibald V. Hill formulated a description for the behavior of Y with respect to α: the so-called Hill equation empirically describes the fraction of binding sites occupied by the ligand as a function of the ligand concentration [28–31].
This equation generalizes the Michaelis-Menten law (2) and reads as

Y(α) = α^{n_H}/(α0^{n_H} + α^{n_H}),  (3)

where n_H is referred to as the Hill coefficient and can be interpreted as the effective number of binding sites that are interacting with each other. This number can be measured as the slope of the curve log[Y/(1 − Y)] versus log α, calculated at the half-saturation point. Of course, if n_H = 1 there is no cooperation at all and each binding site acts independently of the others (and, consistently, Michaelis-Menten kinetics is restored), and vice versa; if n_H > 1, the reaction is said to be cooperative (just like in hemoglobin), and if n_H ≫ 1 the cooperation among binding sites is so strong that the sigmoid becomes close to a step function and the kinetics is named ultrasensitive.
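The Hill law (3) and the operational definition of n_H as a log-log slope can be checked numerically; the sketch below (our own naming, assuming the half-saturation point α0 is known) estimates the slope by finite differences:

```python
import math

def hill(alpha, alpha0, n):
    """Hill saturation function, eq. (3)."""
    return alpha**n / (alpha0**n + alpha**n)

def hill_slope(Y, alpha0, eps=1e-6):
    """Slope of log(Y/(1-Y)) versus log(alpha) at half saturation."""
    logit = lambda y: math.log(y / (1.0 - y))
    a1, a2 = alpha0 * (1 - eps), alpha0 * (1 + eps)
    return (logit(Y(a2)) - logit(Y(a1))) / (math.log(a2) - math.log(a1))
```

Applied to hill itself the estimator returns the exponent n, while applied to the Michaelis-Menten curve (2) it returns 1.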

The Michaelis-Menten law, together with the extension by Hill, provided a good description for the bulk of chemical reactions; however, things were not perfect yet. For instance, some yeast proteins (e.g., glyceraldehyde 3-phosphate dehydrogenase [32]) produced novel (mild) deviations from the Hill curve: for these enzymes, the cooperativity of the binding sites decreases as the ligand concentration increases. Subsequent work by Daniel E. Koshland allowed understanding this kind of phenomenology by further enlarging the theoretical framework through the introduction of the concept of negative cooperativity. In fact, in the previous example, beyond the positive cooperation between the binding sites there are also underlying negative-cooperative effects. Their effective action is to diminish the overall binding capabilities of the enzyme and thus to reduce the magnitude of its Hill coefficient.

1.2. The Mechanics Background

The progressive enlargement of a theoretical scaffold to fit an ever-increasing amount of evidence is a common feature in the historical development of scientific disciplines [36, 37]. This is the case also for Mechanics and, as we will see, the analogy with Chemical Kinetics goes far beyond this feature.

Beyond Kinematics, which describes the motion of systems without considering their mass or the forces that caused the motion, in the seventeenth century Newton gave a sharp description of Mechanics, in the form of laws describing how masses dynamically respond when stimulated by an external force (or moment). Here, the input is the force while the output is the motion of the body. Newtonian dynamics ruled for centuries and, in fact, it was so well consolidated that scientists, among whom Joseph-Louis Lagrange, William R. Hamilton, and Carl G. J. Jacobi, later reformulated the entire theory in a powerful and elegant variational flavor. The theory was overall brilliant in explaining the perceivable reality, but exceptions emerged in the limits of the very small and the very fast.

We will focus on the latter. In the Newtonian world, if an applied force is kept constant over a mass, this will constantly accelerate, eventually reaching diverging velocities. This was perfectly consistent with the general credo that no finite speed limit exists for material bodies. However, this picture broke down after 1887, when the famous experiment by Albert A. Michelson and Edward Morley showed that the speed of light is invariant, independent of the motion of the observer. The following years were dense with novel approaches and ideas by many scientists, such as Hendrik Lorentz and Hermann Minkowski, and culminated with the special relativity by Albert Einstein in 1905. According to this theory, no massive body can reach a diverging velocity, the limiting speed being the speed of light. The classical Hamilton-Jacobi equations and Galilean transformations gave way to the Klein-Gordon formulation and to Lorentz covariances and contravariances (the natural metric being Minkowskian) [38]. Clearly, classical mechanics was still a good reference framework for the vast majority of the data collected (much like positive cooperativity accounted for the bulk of the empirical data in the chemical counterpart); however, there were rare phenomena (e.g., muon decay in the atmosphere [39]) that required a broader scaffold which, in the opportune limits, could recover the classical one.

Although this historical connection between Chemical Kinetics and Classical Mechanics may look odd at first glance, as we will prove, there is a formal analogy between their mathematical representations. In the next section we will summarize the main results concerning the analogy at the classical level. More sharply, the saturation plot of classical (positive-cooperative) chemical kinetics (namely, the input-output relation between the saturation function and the concentration of the substrate) can be derived by a minimum action principle that is the same that holds in classical mechanics when describing the motion of a mass in the Hamilton-Jacobi framework. In this parallelism, the saturation function in Chemistry plays the same role as the velocity in Physics: thus, exactly as happens in special relativity, the velocity of the mass is bounded (by definition, the saturation function cannot exceed one). Indeed, we can follow this mathematical equivalence and verify that there is actually a natural broader formulation for chemical kinetics, namely through the Klein-Gordon setting (rather than its classical Hamilton-Jacobi counterpart), and the theory as a whole is Lorentz-invariant. Remarkably, when read through chemical lenses, this extended relativistic setting allows for the anticooperative corrections that Koshland revealed in the study of the yeast enzymes, resulting in a complex mixture of positive and negative cooperation among binding sites.

2. The Standard Mathematical Scaffold for Classical Cooperativity

As anticipated in Section 1.1, cooperativity is a widespread phenomenon in Chemistry and its underlying mechanisms can be multiple: for example, if the adjacent binding sites of a protein can accommodate charged ions, the attraction/repulsion between the ions themselves may result in positive/negative kinetics; in the most common cases, the bonds with the substrate modify the protein conformational structure, influencing possible further links in an allosteric way [21, 40]. Whatever the origin, cooperativity in Chemistry is a typical emergent property that directly relates the microscopic description of a system at the single binding-site level with the macroscopic properties shown by its constituent molecules, cells, and organisms; thus the use of Statistical Physics for its investigation appears quite natural [26, 28]. Usually, in Statistical Physics one is provided with an (inverse) temperature β and with a Hamiltonian (i.e., a cost-function) describing the model at the microscopic level, namely, in terms of elementary variables, couplings among elementary variables, and external fields acting over these. The goal is to obtain the free energy of the model, from which the average values of the macroscopic observables can be directly derived [26].

2.1. Formulation of the Problem: The Thermodynamical Free Energy

In the following we summarize the minimal assumptions needed when modelling chemical kinetics from the Statistical Physics perspective; for a more extensive treatment of this kind of modelling we refer to [21, 26, 28, 29, 41], while for a rigorous explanation of the underlying equivalence between Statistical Mechanics and Analytical Mechanics we refer to the seminal works by Guerra [42], dealing with the Sherrington-Kirkpatrick model (and then deepened in, e.g., [43–46]), and by Brankov and Zagrebnov in [47], dealing with the Husimi-Temperley model (and then deepened in, e.g., [48–51]).

(i) Each binding site may or may not be occupied by a ligand: this allows us to code its state (empty versus full) by a Boolean variable. For the generic i-th site, we will use an Ising spin σ_i = ±1, where σ_i = −1 represents an empty site while σ_i = +1 means that the site is occupied. Clearly, if there are overall N binding sites, i = 1, …, N.

(ii) It is rather inconvenient (and ultimately unnecessary) to deal with the whole set {σ_i} if we are interested in the properties of large numbers of these variables (i.e., in the so-called thermodynamic limit corresponding to N → ∞). If we want to distinguish between a fully empty state (ordered case), a fully occupied state (ordered case), and a completely random case where σ_i = ±1 with equal probability (disordered case), it is convenient to introduce the order parameter for these variables as the magnetization (this term stems from the original application of the Statistical Mechanics model in the context of magnetism) that reads as the arithmetic average of the spin state, namely,

m_N(σ) = (1/N) Σ_{i=1}^{N} σ_i.  (4)

There is a univocal relation between the magnetization m in Physics and the saturation function Y in Chemistry, where, we recall, we denote with Y the fractional occupation of the binding sites.
In fact, one has [28, 29]

Y = (1 + m)/2.  (5)

Equation (5) constitutes the first bridge between the Chemistry we aim to describe (via the saturation function Y) and the Physics that we want to use (via the magnetization m).

(iii) All the binding sites interact with the ligand by the same strength. This is a standard assumption in Chemical Kinetics [29, 31, 52] and it means that the diffusion of the ligands is fast enough to ensure a homogeneous solution. The concentration of free ligands is mapped into a one-body contribution in the cost-function. This term encodes the action of an external magnetic field h in such a way that, if the field acting on σ_i is positive, the spin will tend to align upwards (namely, this direction is energetically favored), and vice versa. This homogeneous mixing assumption translates into a homogeneous external field h, and the related contribution reads as

H_1(σ|h) = −h Σ_{i=1}^{N} σ_i.  (6)

Notice that h plays as a chemical potential and, consistently, it can be related to the substrate concentration α as

h = (1/2) ln(α/α0),  (7)

α0 being the value of the ligand concentration at half saturation. Equation (7) constitutes the second bridge between the Chemistry we aim to describe (via the ligand concentration α) and the Physics that we want to use (via the magnetic field h).

(iv) The binding sites can cooperate in a positive manner: this can be modelled by introducing a coupling J between the variables. The simplest mathematical form is given by a two-body contribution in the cost-function. This term encodes the reciprocal interactions among binding sites and it reads as

H_2(σ|J) = −(J/N) Σ_{i<j} σ_i σ_j,  (8)

where J is the interaction strength and the sum runs over all possible pairs; the normalization factor 1/N ensures the linear extensivity of the cost-function with respect to the system size. A positive value for J implies an imitative interaction among binding sites: configurations where spins tend to be aligned with each other (namely, where sites tend to be either all occupied or all unoccupied) are energetically more favoured and will therefore be more likely.

(v) Combining together the previous contributions we get the total Hamiltonian:

H_N(σ|J,h) = −(J/N) Σ_{i<j} σ_i σ_j − h Σ_{i=1}^{N} σ_i.  (9)

It is possible to introduce the free energy associated with such a Hamiltonian as

A(β; J,h) = (1/N) ln Σ_{σ} exp[−β H_N(σ|J,h)],  (10)

where β is the inverse temperature in proper units and the sum runs over all the 2^N possible spin configurations. The free energy is a key observable because it corresponds to the difference between the internal energy U and the entropy S (at given temperature T), that is, F = U − TS. If we could obtain an explicit expression for A in terms of the order parameter m, we could obtain an expression for the magnetization expected at equilibrium by imposing ∂A/∂m = 0; in fact, this implies that we are simultaneously asking for the minimum energy and the maximum entropy. Notice that, having stated the two bridges given by (5) and (7), other mappings between the two fields (e.g., the relation between the coupling strength J and the Hill coefficient n_H; see (24) later on) emerge spontaneously as properties of the thermodynamic solutions of the problems.
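For small N, the definitions above can be verified by brute-force enumeration of the 2^N spin configurations; the following sketch (our own, with β = 1 and illustrative parameter values) checks that the field-derivative of the free energy (10) returns the average magnetization, as required by the construction:

```python
import itertools, math

def free_energy_density(N, J, h):
    """A = (1/N) log sum_sigma exp(-H_N), with H_N as in eq. (9) and beta = 1."""
    Z = 0.0
    for sigma in itertools.product((-1, 1), repeat=N):
        S = sum(sigma)
        # sum_{i<j} sigma_i sigma_j = (S**2 - N)/2
        Z += math.exp((J / N) * (S * S - N) / 2 + h * S)
    return math.log(Z) / N

def avg_magnetization(N, J, h):
    """Boltzmann average of m_N = S/N for the Hamiltonian (9), beta = 1."""
    num = Z = 0.0
    for sigma in itertools.product((-1, 1), repeat=N):
        S = sum(sigma)
        w = math.exp((J / N) * (S * S - N) / 2 + h * S)
        num += (S / N) * w
        Z += w
    return num / Z
```

A finite-difference derivative of free_energy_density with respect to h matches avg_magnetization, and for J = 0 the latter reduces to tanh(h), the independent-sites result.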
2.2. Resolution of the Problem: The Mechanical Action

We want to find an explicit expression (in terms of m) for the free energy defined in (10). To this aim let us rename t ≡ βJ and x ≡ βh and let us think of these fictitious variables as a time and a space, respectively. Thus, we can write the free energy as

A(t,x) = (1/N) ln Σ_{σ} exp[(t/2N)(Σ_i σ_i)² + x Σ_i σ_i],  (12)

where we also wrote Σ_{i<j} σ_i σ_j = [(Σ_i σ_i)² − N]/2 ≈ (Σ_i σ_i)²/2, which implies vanishing corrections in the thermodynamic limit. If we work out the spatial and temporal derivatives of the free energy (12) we obtain

∂A/∂t = (1/2)⟨m_N²⟩,  ∂A/∂x = ⟨m_N⟩,  (13)

where the average for a generic observable O(σ) depending on the spin configuration is defined as

⟨O(σ)⟩ = Σ_{σ} O(σ) exp[(t/2N)(Σ_i σ_i)² + x Σ_i σ_i] / Σ_{σ} exp[(t/2N)(Σ_i σ_i)² + x Σ_i σ_i],  (14)

and, posing t = βJ and x = βh, the Boltzmann average for the original system (9) is recovered; this shall be simply denoted as ⟨·⟩.

If we now introduce a potential V(t,x), defined as half the variance of the magnetization, that is,

V(t,x) = (1/2)(⟨m_N²⟩ − ⟨m_N⟩²),  (16)

we see that, by construction, the free energy of this model obeys the following Hamilton-Jacobi equation:

∂A/∂t − (1/2)(∂A/∂x)² − V(t,x) = 0,  (17)

and therefore A is also an action of Classical Mechanics. We can simplify the previous equation by noticing that, for large enough volumes, the magnetization is a self-averaging quantity [26, 43]; thus in the infinite volume limit the potential must vanish; that is, lim_{N→∞} V(t,x) = 0. Here, we are restricting to large volumes and we are therefore left with a Hamilton-Jacobi equation describing a free propagation; since the potential is zero, the Lagrangian coupled to the motion is just the kinetic term:

L = (1/2)⟨m_N⟩²,  (18)

that is, the analogue of the classical formula L = mv²/2, where the mass is set unitary and the role of the velocity is played by the average magnetization ⟨m_N⟩. Solving the Hamilton-Jacobi equation is then straightforward: the solution is formally written as

A(t,x) = A(0, x0) − ∫_0^t L dt′,  (19)

the sign stemming from the time orientation of (17). The evaluation of the Cauchy condition is trivial because, at t = 0, the coupling between variables disappears (see (10)) and A(0, x0) = ln 2 + ln cosh(x0), while the integral of the Lagrangian over time reduces to the Lagrangian times time (as the potential is zero). Pasting these two contributions together we obtain A(t,x) = ln 2 + ln cosh(x0) − (t/2)⟨m_N⟩². Finally, noticing that the equation of motion is a Galilean trajectory, x(t) = x0 − ⟨m_N⟩t (hence x0 = x + ⟨m_N⟩t), and recasting the solution back in the original variables, that is, t = βJ and x = βh, we get the free energy associated with this general positive-cooperative reaction:

A(β; J,h) = ln 2 + ln cosh[β(h + Jm)] − (1/2)βJm².  (20)

By extremizing A with respect to m we get

m = tanh[β(h + Jm)].  (21)

This result recovers the well-known self-consistency equation for the order parameter of the Curie-Weiss model in Statistical Mechanics [26, 43].
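The self-consistency (21) has no closed-form solution for m, but it is easily solved numerically; a minimal sketch (damped fixed-point iteration, our own choice of scheme) reads:

```python
import math

def curie_weiss_m(J, h, beta=1.0, tol=1e-12, max_iter=10000):
    """Solve m = tanh(beta*(h + J*m)) by damped fixed-point iteration."""
    m = 0.5 if h >= 0 else -0.5  # seed biased toward the field direction
    for _ in range(max_iter):
        m_new = math.tanh(beta * (h + J * m))
        if abs(m_new - m) < tol:
            return m_new
        m = 0.5 * (m + m_new)  # damping helps near the critical coupling
    return m
```

For J = 0 it returns tanh(βh), while for h = 0 and βJ < 1 it converges to the paramagnetic solution m = 0.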

2.3. Chemical Properties of the Physical Solution

The self-consistency equation (21) is an input-output relation for a general system of binary elements, possibly positively interacting, under the influence of an external field: the input is the external field h and the output is the magnetization m. We can rewrite (21) in chemical jargon by using the bridges coded in (5) and (7) and fixing, for the sake of simplicity, β = 1; that is,

Y = (1/2){1 + tanh[J(2Y − 1) + (1/2) ln(α/α0)]}.  (22)

Before proceeding, we check that if cooperation disappears (i.e., binding sites are reciprocally independent), the Michaelis-Menten scenario is recovered. Posing J = 0 in the equation above we get

Y = α/(α0 + α),  (23)

that is (apart from a constant factor that can be reintroduced by a suitable shift of the field h), the Michaelis-Menten equation (see (2)).

One step forward, we now take into account the coupling J and relate it to the Hill coefficient n_H. The latter is defined in Chemistry as the slope of log[Y/(1 − Y)] versus log α at half saturation (i.e., when Y = 1/2), and we can obtain its expression following this prescription by using (22), namely,

n_H = 1/(1 − J).  (24)

We note that as J → 0 we get, as expected, n_H = 1: if there is no cooperation between binding sites, the Hill coefficient must be unitary; further, the stronger the coupling J, the (hyperbolically) larger the value of the Hill coefficient. In particular, for J → 1 the kinetics gets ultrasensitive and discontinuities emerge. We remark that, with simpler statistical mechanics models such as linear chains of spins, phase transitions are not allowed; hence ultrasensitive behavior cannot be taken into account: the present framework is the simplest nontrivial scheme where all these phenomena can be recovered at once (see Figure 1 and [28] for more details on ultrasensitive kinetics).
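Relation (24) can be verified numerically by solving (22) at concentrations just above and just below α0 and measuring the log-log slope; the sketch below (our own, with α0 = 1 and β = 1) does exactly that:

```python
import math

def m_of_alpha(J, alpha):
    """Solve the classical self-consistency m = tanh(J*m + 0.5*log(alpha)), alpha0 = 1."""
    h, m = 0.5 * math.log(alpha), 0.0
    for _ in range(5000):  # plain iteration converges for J < 1
        m = math.tanh(J * m + h)
    return m

def n_hill(J, eps=1e-5):
    """Slope of log(Y/(1-Y)) vs log(alpha) at half saturation; expected 1/(1-J)."""
    logit = lambda m: math.log((1 + m) / (1 - m))  # equals log(Y/(1-Y)) via Y = (1+m)/2
    return (logit(m_of_alpha(J, 1 + eps)) - logit(m_of_alpha(J, 1 - eps))) / (
        math.log(1 + eps) - math.log(1 - eps))
```

The measured slope reproduces 1/(1 − J): unitary for J = 0 and hyperbolically growing as J → 1.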

Also, it is worth highlighting the full consistency between our treatment of ultrasensitive kinetics and more standard ones as, for instance, reported in [2] (see the corresponding equation therein), where the expression for the Hill coefficient, once translated into our formulation, again yields (24). We see that for J → 1 the Hill coefficient diverges, which is the signature of ultrasensitive behavior: this is perfectly coherent with our approach where, in that limit, the input-output relation (see the hyperbolic tangent in (22)) becomes a step function.

However, as mentioned in the Introduction, this theory has its flaws, in Chemistry as well as in Mechanics. Regarding the former, the complex picture of yeast enzymes evidenced by Koshland [32, 53], where positive and negative cooperativity appear simultaneously (and with the anticooperativity effect getting more and more pronounced as the substrate concentration is raised), still escapes this mathematical architecture. Further, from the mechanical point of view, two odd things happen: the velocity ⟨m_N⟩ is bounded by 1, while in Classical Mechanics the velocity may diverge; further, if we look at the Boltzmann factor in the free energy (see (12)), this reads as exp[N(t m_N²/2 + x m_N)] and, recalling that the kinetic energy in this mechanical analogy reads as ⟨m_N⟩²/2 (the mass is unitary, thus velocity and momentum coincide), we are allowed to interpret N·A as a real action. From this perspective, the exponent can be thought of as the coupling between the stress-energy tensor and the metric tensor: a glance at the form of the Boltzmann factor reveals that the natural underlying metric mixes the time and space coordinates in a non-Euclidean way; in other words, it is of the Minkowskian type. All these details point toward a generalization of the equivalence including special relativity.

The plan of the next section is to follow the mechanical path, extend the classical kinetic energy by including relativistic corrections, and then investigate its implications. We will see that, in the broader relativistic framework for chemical kinetics, the deviations that Koshland explained by adding anticooperative interactions (beyond the cooperative ones) at high ligand doses are the chemical analog of the deviations from classical mechanics observed at high velocities in special relativity.

3. The Generalized Mathematical Scaffold for Mixed Cooperativity

3.1. Relativistic Setting

The relativistic extension of the Hamiltonian (9) is defined by a Hamiltonian of the form

H_N^{rel}(σ|J,h) = −NJ √(1 + m_N(σ)²) − h Σ_{i=1}^{N} σ_i,  (26)

where m_N(σ) = (1/N) Σ_i σ_i as usual. Next, we have to insert (26) into the free energy (10):

A(t,x) = (1/N) ln Σ_{σ} exp[N t √(1 + m_N(σ)²) + x Σ_i σ_i],  (27)

where we already replaced t ≡ βJ and x ≡ βh in order to work out their streaming, which reads as

∂A/∂t = ⟨√(1 + m_N²)⟩,  ∂A/∂x = ⟨m_N⟩,  (28)

where the Boltzmann averages are defined as (using the magnetization as a trial function)

⟨O(σ)⟩ = Σ_{σ} O(σ) exp[N t √(1 + m_N²) + x Σ_i σ_i] / Σ_{σ} exp[N t √(1 + m_N²) + x Σ_i σ_i].  (29)

As before, the averages will be denoted by ⟨·⟩ whenever evaluated in the sense of thermodynamics (i.e., for t = βJ and x = βh). By a direct calculation, it is straightforward to see that expression (27) obeys the relativistic Hamilton-Jacobi equation:

(∂A/∂t)² − (∂A/∂x)² = 1 + V(t,x),  (30)

where the derivatives combine according to the Minkowskian metric underlying the D’Alembert operator and V(t,x) is the potential whose expression is chosen in order to make the equation valid by construction; this time, it is automatically Lorentz invariant. If the functional is sufficiently smooth (i.e., its derivatives are regular functions of t and x), in the thermodynamic limit we have V → 0; hence in this high-volume limit we are left with

(∂A/∂t)² − (∂A/∂x)² = 1,  (31)

which is the Hamilton-Jacobi form of the Klein-Gordon equation for a free relativistic particle with unitary mass in natural units (c = 1).

In relativistic mechanics, the energy-momentum vector of this particle is defined as

p^μ = (E, p) = γ(1, v),  (32)

where v is the classical velocity of the particle, γ = (1 − v²)^{−1/2}, and E = γ is the relativistic energy (for a unitary mass in natural units). In addition, the contravariant momentum is expressed through the action by the following equation:

p^μ = ∂^μ A.  (33)

Comparing (32) and (33), and recalling (28), it is immediate to identify the magnetization as the relativistic dynamical variable:

p = ⟨m_N⟩,  v = ⟨m_N⟩/√(1 + ⟨m_N⟩²),  (34)

while the Lorentz factor is

γ = √(1 + ⟨m_N⟩²).  (35)

In the thermodynamic limit, the particle is free and its trajectories are the straight lines x(t) = x0 − vt. Since the relativistic Lagrangian L = −√(1 − v²) is constant along these classical trajectories, the free energy can be computed as

A(t,x) = A(0, x0) − L t = ln 2 + ln cosh(x + vt) + t √(1 − v²).  (36)

Setting t = βJ and x = βh, we finally get an explicit expression for the free energy:

A(β; J,h) = ln 2 + ln cosh[β(h + J m/√(1 + m²))] + βJ/√(1 + m²).  (37)

Requiring that the free energy is extremal with respect to the magnetization (from a thermodynamical perspective this condition can be seen as the simultaneous effect of the minimum energy and the maximum entropy principles and from a mechanical perspective as the minimum action principle), the associated self-consistency equation becomes

m = tanh[β(h + J m/√(1 + m²))].  (38)
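As in the classical case, (38) can be solved by fixed-point iteration; a minimal sketch (our own scheme and parameter choices) reads:

```python
import math

def relativistic_m(J, h, beta=1.0, tol=1e-12, max_iter=10000):
    """Solve m = tanh(beta*(h + J*m/sqrt(1 + m*m))) by damped fixed-point iteration."""
    m = 0.0
    for _ in range(max_iter):
        m_new = math.tanh(beta * (h + J * m / math.sqrt(1.0 + m * m)))
        if abs(m_new - m) < tol:
            return m_new
        m = 0.5 * (m + m_new)
    return m
```

For J = 0 the classical and relativistic solutions coincide (both reduce to tanh(βh)); for J > 0 the γ-factor reduces the effective coupling, so the relativistic magnetization never exceeds the classical one at the same (J, h).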

3.2. The Classical Limit from a Chemical Perspective

Reading the self-consistency (38) in chemical terms, that is, using the bridges (5) and (7), we obtain

Y = (1/2){1 + tanh[(1/2) ln(α/α0) + J(2Y − 1)/√(1 + (2Y − 1)²)]}.  (39)

We can now check whether, under suitable conditions, this broader theory recovers the classical limit. First, we notice that under the assumption of no interactions among binding sites (i.e., J = 0) and replacing h = (1/2) ln(α/α0), the Michaelis-Menten behaviour is recovered. This can be shown by rewriting (39) as

Y = α e^{2Jg(Y)} / (1 + α e^{2Jg(Y)}),  with g(Y) = (2Y − 1)/√(1 + (2Y − 1)²),  (40)

where we also shifted α/α0 → α for simplicity. For J = 0 the previous equation reduces to Y = α/(1 + α). Further, taking the classical limit, at the lowest order we have the following expansions:

√(1 + m²) ≈ 1 + m²/2,  m/√(1 + m²) ≈ m,  (41)

such that (38) reduces to (21), in the physical context, and to (22), in the chemical context. Clearly, also the slope at α = α0 is preserved; hence, in the classical limit, we recover the expected expression for the Hill coefficient (see (24)), namely,

n_H = 1/(1 − J).  (42)

3.3. Beyond the Classical Limit

To understand why we expect variations with respect to the Hill paradigm at relatively large values of the substrate concentration, we must check carefully the relativistic self-consistency (38). Let us assume we are working at not too high velocities (i.e., |m| < 1 but no longer negligible) so that we can expand the argument inside the hyperbolic tangent; in particular, approximating m/√(1 + m²) ≈ m − m³/2, we get

m ≈ tanh[β(h + J m − (J/2) m³)].  (43)

The relativistic effects in chemical kinetics become transparent in this way: if we look at the field felt by the binding sites (i.e., the argument inside the hyperbolic tangent), we see that, beyond the standard Hill term Jm (which positively pairs binding sites together), another term, −(J/2)m³, appears that, this time, negatively pairs binding sites together. Retaining this level of approximation, we could write an effective Hamiltonian to generate (43) that reads as

H_N^{eff}(σ|J,h) = −N[(J/2) m_N² − (J/8) m_N⁴ + h m_N];  (44)

hence, beyond the two-body positive coupling coded by the first term, a four-body negative coupling appears. The latter is responsible for the deviation from the classical paradigm and these deviations are in full agreement with the Koshland generalization toward the concept of mixed positive and negative cooperativity [32].
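The quality of the small-m expansion used in (43) is easy to gauge: the sketch below (illustrative naming) compares the exact interaction field with its two-body plus four-body truncation:

```python
import math

def exact_field(m, J):
    """Interaction part of the field in (38): J*m/sqrt(1 + m^2)."""
    return J * m / math.sqrt(1.0 + m * m)

def truncated_field(m, J):
    """Two-body plus four-body approximation of (43): J*m - (J/2)*m**3."""
    return J * m - 0.5 * J * m**3
```

At m = 0.2 the two expressions already agree to about one part in a thousand, while for larger m the exact field falls increasingly below the bare Hill term Jm, which is precisely the anticooperative effect discussed above.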

In particular, we can see at work the entire reasoning of Koshland, who pointed out how, at large enough substrate concentration, the positive cooperativity of the reaction diminishes. In fact, for α ≪ α0 no relativistic effect can be noted. By increasing α (the input of the system), we get a growth of m (the output of the system): the latter rises in response to α and it is enhanced because of the two-body term in the effective Hamiltonian (44), the four-body term still being negligible. As α keeps on growing, m increases as well, up to a point where it reaches high enough values such that, from then on, also the four-body term inside the effective Hamiltonian (44) becomes relevant. At this point, a novel, anticooperative effect is naturally induced in the reaction and it yields a reduction of the Hill coefficient. In the next analysis these qualitative remarks shall be addressed in more detail.

We focus on the definition of the Hill coefficient based on the Hill equation:

Y = α^{n_H}/(α0^{n_H} + α^{n_H}).  (45)

This equation accounts for the possibility that not all receptor sites are independent: here n_H is the average number of interacting sites and corresponds to the slope of the Hill plot. The latter is based on a linear transformation obtained by rearranging (45) and taking the logarithm:

log[Y/(1 − Y)] = n_H log α − n_H log α0.  (46)

Thus, one plots log[Y/(1 − Y)] versus log α, fits with a linear function, and the resulting slope, calculated at the half-saturation point, provides the Hill coefficient. As already underlined, the Michaelis-Menten theory corresponds to n_H = 1 and any deviation from a unit slope tells us about deviations from that model.

For the (classical and relativistic) models analyzed here (coded in the Hamiltonians (9) and (26)) we can estimate the slope directly from the self-consistency equations (22) and (39). Let us start with the classical model. We preliminarily notice that

n_H = d log[Y/(1 − Y)]/d log α |_{α=α0} = 2 dm/d ln α |_{α=α0}.  (47)

Therefore, we just need to evaluate dm/d ln α in α = α0, which reads as

dm/d ln α = (1 − m²)(J dm/d ln α + 1/2).  (48)

Posing α = α0 and noticing that m = 0 when α = α0, we have

dm/d ln α |_{α=α0} = 1/[2(1 − J)].  (49)

By plugging this result in (47), we finally have

n_H = 1/(1 − J).  (50)

One can see that when J = 0 the Hill coefficient is unitary as expected for noncooperative systems, when J > 0 the coefficient is larger than 1, indicating that receptors are interacting, and when J < 0 the coefficient is smaller than 1, as expected for negative cooperativity.

Let us now move to the relativistic model. Again, we just need to evaluate dm/d ln α, which, recalling (39), reads as

dm/d ln α = (1 − m²)[J(1 + m²)^{−3/2} dm/d ln α + 1/2].  (51)

Solving for the derivative, the previous expression simplifies as

dm/d ln α = (1 − m²)/{2[1 − J(1 − m²)(1 + m²)^{−3/2}]}.  (52)

Thus, we can write the local slope of the Hill plot as

n_H^{rel} = 1/[1 − J(1 − m²)(1 + m²)^{−3/2}].  (53)

Note that the factor (1 + m²)^{−3/2} ≤ 1 suppresses the effective coupling with respect to the classical analogue, whose corresponding local slope is 1/[1 − J(1 − m²)]: the two coincide exactly at half saturation (m = 0), while away from it n_H^{rel} ≤ n_H, confirming that the relativistic correction weakens the emerging cooperativity.

3.4. Further Robustness Checks

As stressed above, for a fixed interaction coupling J, the relativistic model is expected to exhibit a lower cooperativity with respect to the classical model. In order to quantify this point we considered different quantifiers for cooperativity and we compared the outcomes for the relativistic and the classical models set at the same value of J. Let us start with the Koshland measure of cooperativity, which is defined as the ratio (notice that the Koshland index is actually strongly related to the Hill coefficient (see, e.g., [2]))

κ = α_{0.9}/α_{0.1},  (54)

where α_{0.9} denotes the substrate concentration corresponding to a 90% saturation, while α_{0.1} denotes the substrate concentration corresponding to a 10% saturation; that is, Y(α_{0.9}) = 0.9 and Y(α_{0.1}) = 0.1. In the noncooperative case one has κ = 81 and, accordingly, if the ratio is smaller than 81 (meaning that the saturation curve is relatively steep) one has positive cooperativity, while if the ratio is larger than 81 one has negative cooperativity. The advantage in using the index κ is that it can be easily measured, yet it ignores all the information that can be derived from the shape of Y(α). In particular, this quantifier can be estimated starting from a Klotz plot (see, e.g., Figure 2(a)), where the saturation function is shown versus the logarithm of the (free) ligand concentration; in the presence of positive cooperativity this plot yields a characteristic sigmoidal curve. For the models analyzed here we can estimate κ directly from the self-consistency equations (22) and (39). Starting from the classical model and posing m = 4/5 (i.e., Y = 0.9) and m = −4/5 (i.e., Y = 0.1) we get, respectively,

ln(α_{0.9}/α0) = 2 atanh(4/5) − (8/5)J,  ln(α_{0.1}/α0) = −2 atanh(4/5) + (8/5)J,  (55)

and, with some algebra (recalling atanh(4/5) = ln 3),

ln κ = ln 81 − (16/5)J,  (56)

that is,

κ = 81 e^{−16J/5}.  (57)

Of course, when J = 0 we recover the value κ = 81, when J > 0 we get κ < 81, and when J < 0 we get κ > 81.

Repeating analogous calculations for the relativistic model we get and, with some algebra, that is, Again, one can check that when we recover the value , when we get , and when we get . Also, . This means that, even according to this quantifier, when fixing the same coupling constant , the emerging cooperativity is weaker for the relativistic model, as expected.
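The 81-fold rule underlying the Koshland index can be made concrete on the phenomenological Hill curve Y(α) = αⁿ/(1 + αⁿ), used here purely as an illustration (it is not the microscopic model of the previous sections): the ratio of the concentrations at 90% and 10% saturation equals 81^(1/n), hence exactly 81 in the noncooperative case n = 1 and smaller under positive cooperativity.

```python
def hill_saturation_inverse(Y, n):
    """Concentration giving saturation Y for the illustrative Hill curve
    Y = alpha**n / (1 + alpha**n)."""
    return (Y / (1.0 - Y)) ** (1.0 / n)

def koshland_index(n):
    """Ratio of the concentrations giving 90% and 10% saturation."""
    return hill_saturation_inverse(0.9, n) / hill_saturation_inverse(0.1, n)

print(koshland_index(1))  # ≈ 81: noncooperative reference value
print(koshland_index(2))  # ≈ 9: a steeper, positively cooperative curve
```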

Next, let us consider the cooperativity quantifier derived from the Scatchard plot. We recall that this plot is built by showing the behavior of with respect to . In fact, according to the simplest scenario (this corresponds to the Michaelis-Menten theory and to Clark's theory, and it requires a set of simplifying assumptions, among which: the interaction is reversible; all the binding molecules are equivalent and independent; the biological response is proportional to the number of occupied binding sites; and the substrate only exists either in a free (i.e., unbound) form or bound to the receptor), at equilibrium one can write where is the proportionality constant between response and occupancy (i.e., it is the ratio between the dissociation and the association constants), and rearranging (62) we have The previous expression fits the equation of a line for versus , whose slope is . The advantage of the Scatchard plot is that it is a very powerful tool for identifying deviations from the simple model, which, in the absence of deviations, is represented by a straight line. In particular, a concave-up curve may indicate the presence of negative cooperativity between binding sites, while a concave-down curve is indicative of positive cooperativity. Moreover, in the latter case, the maximum occurs at the fractional occupancy which fulfills , where provides another quantifier of cooperativity.
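The geometry of the Scatchard plot is easy to verify numerically. Under the simple noninteracting model Y = α/(K + α), rearranging gives Y/α = (1 − Y)/K, a straight line of slope −1/K and intercept 1/K; for an illustrative positively cooperative Hill curve with n = 2 the plot instead bends concave-down, with its maximum at fractional occupancy Y* = (n − 1)/n = 1/2. A minimal sketch (both functional forms are illustrative stand-ins, not the self-consistency equations of the text):

```python
import numpy as np

K = 2.0
alpha = np.linspace(0.1, 10.0, 200)

# Simple (noncooperative) model: Y = alpha / (K + alpha).
Y = alpha / (K + alpha)
slope, intercept = np.polyfit(Y, Y / alpha, 1)
print(slope, intercept)  # ≈ -1/K and 1/K: an exactly straight line

# Hill curve with n = 2 as a positively cooperative example.
n = 2
Yc = alpha**n / (1 + alpha**n)
bound_over_free = Yc / alpha          # the ordinate of the Scatchard plot
Y_star = Yc[np.argmax(bound_over_free)]
print(Y_star)  # ≈ (n - 1)/n = 0.5: occupancy at the concave-down maximum
```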

Starting from the classical model, we can build the function by first obtaining as a function of ; this can be done by inverting formula (22), namely, Differentiating with respect to we get which is identically equal to when , monotonically decreasing in when , and monotonically increasing in when . The (possible) root therefore provides the extremal point; that is, and, comparing with (64), we get

We now repeat analogous calculations for the relativistic model. First, we get as a function of by inverting formula (39); namely, Differentiating with respect to we get which is again identically equal to when , but is no longer monotonic when . More precisely, by studying we can derive that, when is relatively small, does not exhibit any extremal points, but there is an inflection point at intermediate values of ; for intermediate values of there is a minimum at small values of and a maximum at larger values of ; for large values of there is a maximum. The extremal points can be found as the roots of a degree- function of . We can obtain an estimate of the value corresponding to the maximum by recalling and neglecting higher-order terms. In this way we get and, comparing with (64), we get

The three plots considered here (i.e., Klotz, Scatchard, and Hill) and the related estimates for the extent of cooperativity are presented in Figure 2. In particular, in (d) we compare the cooperativity quantifiers for several values of : as anticipated, in general, for a given value of , the relativistic model gives rise to a weaker cooperativity.

We proceed with our analysis by investigating more deeply the role of the coupling constant in the binding curves of the two models. In Figure 3 we present the Klotz plot (a), the Scatchard plot (b), and the Hill plot (c) for the relativistic and the classical models, comparing the outcomes for different values of . As expected, the point corresponding to and is a fixed point in each plot and, in general, the gap between the two models is enhanced when is larger (i.e., when is closer to ). Also, when is not too small, the Scatchard plot for the relativistic model displays an inflection point at small values of , suggesting that when the saturation is small the cooperativity is not truly positive.

In the final part of this section we deepen the comparison between the classical and the relativistic models. To this aim, we solved (39) numerically for different values of and of , obtaining a set of data . We can think of this set of data as the result of a series of measurements in which we record the saturation value at a given substrate concentration. Now, assuming that in this experiment we have no hints about the underlying cooperative mechanisms, we may apply the formulas for plain positive cooperativity and infer the value of . More practically, we calculate numerically from the relativistic model for different values of and of the coupling strength, referred to as for clarity. Next, we manipulate the set of data by inverting the formula in (22): as the value of is assumed to be known, we can derive the coupling strength, referred to as , expected within a classical framework. In this way, we can compare the original coupling constant with the inferred one . We can translate this procedure into formulas as follows: with equality holding only when .
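The inversion step can be sketched as follows. Assuming, as before, that the classical self-consistency takes the mean-field form m = tanh(J·m + h) (with m = 2Y − 1 and h the rescaled log-concentration; this explicit form is our assumption for illustration), each synthetic "measurement" (h, m) with m ≠ 0 yields the classical estimate J = (atanh m − h)/m. The sketch below generates data from a classical model with known coupling and verifies that the inversion recovers it; feeding it relativistic data would instead produce the mismatched values discussed in the text.

```python
import math

def classical_saturation(J, h, tol=1e-14, max_iter=100000):
    """Damped fixed-point solution of the assumed form m = tanh(J*m + h)."""
    m = 0.0
    for _ in range(max_iter):
        m_new = 0.5 * m + 0.5 * math.tanh(J * m + h)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

def infer_coupling(h, m):
    """Invert atanh(m) = J*m + h for J, given one measured point."""
    return (math.atanh(m) - h) / m

# Synthetic "measurements" from a classical model with known coupling:
J_true = 0.7
for h in (0.2, 0.5, 1.0):
    m = classical_saturation(J_true, h)
    print(infer_coupling(h, m))  # ≈ 0.7 at every concentration
```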

In Figure 4(a) we plot versus for different values of . Notice that the two parameters are related by a linear law, whose slope is smaller than and decreases with . This confirms that the relativistic model yields a weaker cooperativity. The negative contributions in the relativistic model become more effective when and are large, as further highlighted in Figure 4(b).

4. Conclusions

The rewards of the bridge linking Chemical Kinetics and Analytical Mechanics are several, both theoretical and practical, as we now briefly discuss.

The former lie in a deeper understanding of the mathematical scaffold used to model real phenomena: it is far from trivial that the description of chemical/thermodynamic equilibrium is formally the same as the mechanical one. In particular, the self-consistency relation (38) that emerges from the thermodynamic principles (in fact, it stems from the requirement of simultaneous entropy maximization and energy minimization) also turns out to be, in the mechanical analogy, a direct consequence of the least-action principle: the stationary point is insensitive to a slight perturbation of the evolution of the system in the interval . Explicitly, we shift infinitesimally ; then from which (38) is recovered (as usual by setting and ), since this holds for all variations .
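In compact form, the variational step just described is the standard stationarity argument: for a generic Lagrangian $L(x,\dot{x})$ and an infinitesimal shift $x \to x + \delta x$ vanishing at the endpoints of the interval,

```latex
\delta S
= \delta \int_{t_1}^{t_2} L\big(x,\dot{x}\big)\,dt
= \int_{t_1}^{t_2} \left( \frac{\partial L}{\partial x}
  - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} \right) \delta x \, dt
= 0 ,
```

and, since this holds for all variations $\delta x$, the Euler-Lagrange equation follows; in the mechanical analogy of the text, this is the equation that reproduces the self-consistency relation (38).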

Even more exciting, still on the theoretical side, is the realization of the complexity of systems presenting mixed reactions (i.e., where both positive and negative cooperativity are simultaneously at work) and of the possible applications to information processing, as we are going to discuss.

First, let us clarify that in the literature the terms complex network and complex system are used with (mainly) two, rather distinct, meanings. In full generality, let us consider a Hamiltonian of the form and let us write the two-body coupling matrix as , where is the adjacency matrix, accounting for the bare topology of the system (its entry is if there is a link connecting the related nodes , which are therefore allowed to interact with each other, and it is zero otherwise), and is the weight matrix, accounting for the sign and the magnitude of the links (i.e., the type of interactions among binding sites).

Dealing with , networks where the topology is very heterogeneous (e.g., the distribution of the number of links stemming from a node has a power-law scaling) are called complex networks, as is the case for the Barabasi-Albert model [54].

Dealing with , networks where the entries of the weight matrix are both positive and negative are termed complex systems, as in the Sherrington-Kirkpatrick model [55] for the so-called spin glasses.
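The decomposition of the coupling matrix into adjacency and weight factors can be made concrete with a small sketch (the matrix below is hypothetical, chosen only for illustration): the adjacency matrix retains the bare topology, the weight matrix carries sign and magnitude, and the coexistence of both signs in the weights is precisely what qualifies the system as complex in the spin-glass sense.

```python
import numpy as np

# Hypothetical symmetric coupling matrix among four binding sites:
# positive entries = positive cooperativity, negative = negative cooperativity.
J = np.array([
    [ 0.0,  0.8, -0.3,  0.0],
    [ 0.8,  0.0,  0.0,  0.5],
    [-0.3,  0.0,  0.0, -0.6],
    [ 0.0,  0.5, -0.6,  0.0],
])

A = (J != 0).astype(int)  # adjacency matrix: who interacts with whom
W = J.copy()              # weight matrix: sign and magnitude of each link

assert np.array_equal(J, A * W)  # the Hadamard decomposition of the text

# Both signs present among the weights: the spin-glass criterion.
is_spin_glass = bool((W > 0).any() and (W < 0).any())
print(is_spin_glass)  # True
```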

Crucially, spin glasses spontaneously show very general information-processing skills and computational capabilities: for instance, Hopfield neural networks [56] and restricted Boltzmann machines [57], key tools in Artificial Intelligence (in neural networks and machine learning, resp.), are two types of spin glasses, and it is with this last definition of complexity that we can now read the information-processing capabilities of the elementary reactions we studied. For a given macromolecule, we can map each binding site onto a node and draw links among interacting nodes: if two nodes are correlated (they show positive cooperativity), their interaction is positive, while if two nodes are anticorrelated (they show negative cooperativity), their interaction is negative. Mixed reactions thus correspond to spin glasses, and, through our bridge, we can assess how much information has been processed in a given reaction by evaluating the amount of information processed in its corresponding spin-glass representation. We have already started this investigation in [21, 28, 41].

Finally, from a practical perspective, in the classical limit (i.e., for simple reactions) we have an explicit expression that directly relates the Hill coefficient , which can be measured experimentally, to the interaction coupling assumed in the model; namely, . This allows one to design specific models and perform very simple validations (at least at this coarse-grained level), and it offers a new computational perspective from which to analyze already developed ones (see, e.g., [58–62]). Then, regarding complex reactions, the puzzling scenario evidenced by Koshland finally finds a simple descriptive framework that, crucially, also recovers the standard cooperative scenario in the proper limit: full coherence among various, apparently antithetic, results is obtained within a unique framework.
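This practical recipe can be sketched in a few lines, assuming the classical relation takes the closed form n_H = 1/(1 − J) (an assumption of ours, consistent with the limits quoted in Section 3: unitary Hill coefficient for vanishing coupling, above 1 for positive coupling, below 1 for negative coupling); a measured Hill coefficient then fixes the coupling as J = 1 − 1/n_H.

```python
def coupling_from_hill(n_H):
    """Invert the (assumed) classical relation n_H = 1/(1 - J)
    to estimate the interaction coupling from a measured Hill coefficient."""
    return 1.0 - 1.0 / n_H

print(coupling_from_hill(1.0))  # 0.0: noncooperative binding
print(coupling_from_hill(4.0))  # 0.75: strong positive cooperativity
print(coupling_from_hill(0.5))  # -1.0: negative cooperativity
```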

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

Elena Agliari and Adriano Barra are grateful to INdAM-GNFM for partial support via the project AGLIARI2016. Adriano Barra also acknowledges MIUR for basal funding for research (2017-2018) and Salento University for further support.