Abstract

The extreme sensitivity of rheology to the microstructure of polymer melts has prompted the development of “analytical rheology,” which seeks to infer the structure and composition of an unknown sample from rheological measurements. Typically, this involves the inversion of a model, which may be mathematical, computational, or completely empirical. Despite the imperfect state of existing models, analytical rheology remains a practically useful enterprise. I review its successes and failures in inferring the molecular weight distribution of linear polymers and the branching content in branched polymers.

1. Introduction

Complex fluids, which include materials such as foams, gels, colloids, polymers, and emulsions, lie somewhere in the continuum between ideal solids and liquids. When subjected to an external deformation, they exhibit elasticity or memory, like classical solids; they also relax and dissipate energy by viscous flow, like classical liquids. This explains why these materials are called “viscoelastic.” The systematic study of the flow behavior of such materials is called rheology [1–3]. More precisely, rheology is the study of the relationship between the stresses generated within a viscoelastic material and the deformation applied to it.

1.1. Analytical Rheology

The rheology of many complex fluids depends very sensitively on the material microstructure. For example, the rheology of oil-water emulsions, which is important in the food, cosmetic, and drug formulation industries, is determined by the concentration and particle size distribution [4–6]. This essential link between microstructure and rheology provokes the idea of analytical (or analytic) rheology, which seeks to infer microstructural information from viscoelastic measurements. The primary motivation for analytical rheology, as elaborated more specifically later, is threefold: (i) many important microstructural features are extremely hard to probe using standard analytical techniques, (ii) viscoelastic measurements are often much more sensitive to precisely the microstructural features of most interest, and (iii) in the linear regime, rheology is cheaper and more convenient to measure.

In order to realize the promise of analytical rheology, we require a reasonably accurate rheological model, which may range from a completely empirical model to one based purely on a molecular understanding of the underlying physics (see Figure 1). These models take, as their input, details of the material microstructure and the nature of the applied deformation and yield, as their output, a prediction of the rheological response. Analytical rheology, on the other hand, seeks to perform the inverse operation and hence is sometimes referred to in the literature as inverse rheology. Given the rheology of an unknown sample, it uses a rheological model to provide details of the microstructure.

It should be pointed out that, while there is considerable overlap, the goal of analytical rheology is somewhat different from that of the in silico design of polymeric materials [7–11]. Analytical rheology is a diagnostic tool, useful for determining the microstructure of a particular sample. The goal of in silico design is to determine what the microstructure should be, given a desired material response. Put differently, in “analytical” rheology, the analysis is performed after the measurement of rheology, while in in silico “design” it is performed before experimental synthesis. Despite this subtle difference, the two employ similar material modeling and computational methods, and there is substantial cross-talk between them.

Recent advances in analytical rheology for all the different kinds of complex fluids are too broad a topic to cover adequately. Hence, in this paper, we will focus on a particularly important class of complex fluids, namely, polymer melts, which have seen tremendous progress in both rheological modeling and analytical rheology, over the past 30 years. In addition, purely in terms of commercial impact, polymers are among the most important members of the complex fluids family.

1.2. Polymers

Since the development of the first truly synthetic polymer, Bakelite, in 1909, synthetic polymers have come to pervade every aspect of our lives—ranging from automobiles to health care, and from packaging to the consumer electronics industry [12]. Synthetic polymers are lightweight, corrosion resistant, and easy to process into complex shapes, and they possess properties that can be optimized for specific applications by engineering the chemistry and spatial orientation of monomers.

Among synthetic polymers, polyolefins, which include polyethylene and polypropylene, constitute an important class. Together, they account for more than half of the synthetic polymer produced worldwide on a volume basis, with the current annual global production hovering around 100 billion kilograms. Our treatment in this paper will reflect a bias towards polyethylene and polypropylene due to their status as commercial juggernauts.

1.2.1. Architecture

While linear polymers, which consist of monomers chained together, are quite common, there is a rich diversity of polymer architectures, including ring and hyperbranched polymers (Figure 2). Theoretical understanding and industrial applications of synthetic ring polymers are currently underdeveloped. Hence, we will not cover them in this paper. When side branches are synthetically attached to a linear polymer (LP), we get branched or comb-like polymers. Branched polymers (BPs) are of immense technological importance, especially for polyolefins, and have been studied extensively over the past 60 years [13–18]. Besides direct implications for processability, branching can also affect attributes such as tack and peel properties of pressure-sensitive adhesives [19]. Branching also plays an important physiological role in natural polymers such as starch, which is a polysaccharide consisting of glucose units chained together by glycosidic bonds, where it determines digestibility [20]. In highly branched structures, such as dendrimers and other hyperbranched polymers [21], branching enables applications in a variety of industries that include coatings, additives, drug and gene delivery, and nanotechnology.

When the length of the side arms exceeds a few hundred carbon atoms (about 2–3 times the entanglement molecular weight, M_e), the branching is classified as long-chain branching (LCB) [22, 23]. When the arms are shorter, it is classified as short-chain branching (SCB).

Both SCB and LCB affect polymer properties, but the manner in which they do so is relatively independent of each other. Thus, the length and distribution of SCB can be tuned to control properties [24–32] such as crystallization kinetics, morphology, blend miscibility, mechanical properties of blown or compression-molded films such as tear and impact strengths, and resistance to crack growth. Ultimate mechanical properties and crack resistance improve as the length of the side branch increases, possibly because SCB influences morphology and tie-molecule concentration, requiring greater energy to be absorbed to initiate cracks [33]. SCB is usually detected and quantified using analytical techniques like solution differential scanning calorimetry (DSC), temperature rising elution fractionation (TREF), and crystallization analysis fractionation (CRYSTAF) [34–39].

Compared to LCB, the effect of SCB on rheology is very weak [18, 40]. Conversely, the effect of LCB on crystallization and ultimate mechanical properties is relatively small. However, LCB has an enormous impact on rheological properties. It has long been known that the addition of trace amounts of long-chain branching (LCB) produces dramatic changes in the linear and nonlinear rheology [40–45]. These well-documented effects include a departure from the “3.4 power law” relating the zero-shear viscosity to the weight-averaged molecular weight, an unusually large sensitivity of the viscosity to temperature, that is, higher flow activation energies [46], and enhanced shear thinning and strain hardening that lead to improved tear resistance, processability, and so forth.

What is the practical importance of LCB? Consider the illustrative example of isotactic polypropylene, which has many desirable solid-state properties, such as a high melting point, tensile strength, and stiffness. Unfortunately, it has poor melt strength, which seriously impairs its ability to be processed by foaming, blow molding, thermoforming, and so forth. Adding LCB remedies the poor melt strength. LCB may be introduced synthetically, by postreactor chemical modification [47–49], or by electron beam irradiation [50–52].

In an industrial setting, this dichotomy of SCB and LCB is helpful, since the levels of the two types of branching can be adjusted independently to control rheology (important for melts during processing) and crystallinity (important for ultimate solid-state properties).

1.3. Scope and Organization

For the most part, we will confine ourselves to the linear viscoelastic regime, which is the regime of small deformations. In this regime, the material obeys the principle of superposition. Our theoretical understanding of the physics, and our ability to make quantitative predictions based on microscopic models, are most mature in this linear regime. In addition, experiments in the linear viscoelastic regime (usually small-amplitude oscillatory shear) are relatively easy to perform, which makes it an ideal candidate for analytical rheology. Nonlinear rheology, which depends on both the deformation history and the flow type, is sometimes far more sensitive to some features of molecular structure, such as LCB. However, theoretical developments in this regime have not fully matured, partly due to the difficulty of obtaining high-quality data in the strongly nonlinear regime [53].

I will review the specific advantages of analytical rheology of LPs and BPs in the next section. I will then present a brief summary of different molecular models used to study polymer dynamics and rheology and argue why the tube model is an attractive candidate as the rheological model to invert. For LPs, I will focus on the inversion of linear viscoelastic spectra to reconstruct the molecular weight distribution and point out the different techniques and challenges. For BPs, I will review different experimental and model-driven methods to infer the level of LCB from linear and nonlinear viscoelastic measurements. This material is covered in Section 4.

2. Motivation and Background

As described next, the primary motivation for analytical rheology of LPs is convenience, cost, and resolution. For BPs, analytical rheology is often a necessity—a technique of last resort—due to the lack of alternatives.

2.1. Linear Polymers

The problem of analytical rheology for LPs has a relatively straightforward mathematical description: given the linear viscoelastic response of a mixture of LPs, typically in the form of oscillatory shear measurements, determine the molecular weight distribution. In the most general case, the mixture may be both multimodal (having multiple peaks) and polydisperse (having broad distributions). In the literature, terms like “bidisperse” are commonly used to mean “bimodal.” For clarity, we will avoid this terminology. It is quite natural to ask, given the availability of techniques such as size-exclusion chromatography (SEC) and light scattering, “what does analytical rheology offer that these other techniques do not?”

All of these traditional analytical methods for determining molecular weight distribution require dissolving the polymer in a solvent, ideally at room temperature, which can be tricky for many industrially relevant polymers such as polyethylene, polypropylene, Teflon, and others [54–59]. Rheological measurements circumvent this nontrivial and time-consuming dissolution step and are indeed straightforward to perform on essentially all commercially important polymers.

A second important factor in favor of rheology is its sensitivity to the high molecular weight tail of the distribution, which is often important to characterize accurately because of its outsized influence on processability. The sensitivity of rheology to molecular weight is unparalleled. For example, doubling the molecular weight of a moderately entangled LP causes the zero-shear viscosity to increase approximately 10-fold. For a similar doubling of the arm molecular weight of a symmetric star polymer, the zero-shear viscosity increases a dramatic 1000-fold. In contrast, the molecular weight dependence of SEC (roughly logarithmic in M) and light scattering (roughly linear in M) is much weaker. The dependence of other linear and nonlinear material functions on molecular weight is also quite pronounced. This strong dependence can, in principle, be directly translated into analytical sensitivity and resolution, especially at higher molecular weights, where competing analytical methods are particularly ill-suited [54, 60, 61].
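These sensitivity figures follow directly from the scaling laws discussed later in the paper, and the arithmetic is easy to check. The sketch below evaluates the 3.4 power law for linear chains; the star calculation assumes an exponential arm-relaxation scaling with a hypothetical baseline value of the exponent, chosen purely so that the numbers are of the magnitude quoted above.

```python
import math

# Sensitivity of zero-shear viscosity to molecular weight (illustrative).
# For well-entangled linear melts, eta0 ~ K * M**3.4; the chemistry-dependent
# prefactor K cancels when comparing ratios.

def viscosity_ratio_linear(m_ratio, exponent=3.4):
    """Factor by which eta0 grows when M is multiplied by m_ratio."""
    return m_ratio ** exponent

def viscosity_ratio_star(m_ratio, nu_ma_over_me=7.0):
    """Illustrative exponential scaling for a symmetric star,
    tau ~ exp(nu * Ma / Me). nu_ma_over_me is a hypothetical baseline
    value of nu*Ma/Me, chosen only for illustration."""
    return math.exp(nu_ma_over_me * (m_ratio - 1.0))

print(viscosity_ratio_linear(2.0))  # doubling M: about a 10-fold increase
print(viscosity_ratio_star(2.0))    # doubling Ma: about a 1000-fold increase
```

Doubling M gives 2**3.4, which is roughly 10.6, consistent with the 10-fold figure quoted above; the exponential arm scaling turns the same doubling into roughly three orders of magnitude.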

From an industrial standpoint, perhaps no other driving force is as important as cost. Here too, rheology has an important advantage over chromatography; it typically costs only about a third as much as SEC [62]. In addition to these obvious advantages, linear viscoelastic measurements have become more routine and accurate, enabling nonexperts to perform them competently. Consequently, analytical rheology of LPs has made its way into some commercial rheometers.

2.2. Branched Polymers

Characterization of BPs with analytical rheology inherits all of the advantages listed above for LPs, such as the elimination of the solvation step and lower energy costs. In addition, the sensitivity of rheology to the molecular weight of LPs is replaced by an even greater sensitivity to the presence of LCB.

The nature and level of LCB in polymers depend on their synthesis and postsynthesis treatment. Consider, for example, low-density polyethylene (LDPE), which is synthesized using radical chemistry and has about 0.5–4.0 LCB/1000 C atoms, and high-density polyethylene (HDPE), which is synthesized using Ziegler-Natta, Phillips, or metallocene catalysts and has approximately 0.1–0.2 LCB/1000 C atoms in the backbone [63]. LDPE is highly branched, with significant amounts of both SCB and LCB, while metallocene HDPEs can have very well-controlled and predictable amounts of LCB and SCB.

Thus, today, breakthroughs in catalyst technology have allowed us to produce industrial quantities of polymers with precisely controlled molecular structure [64, 65]. However, these advances in chemistry have not been accompanied by simultaneous advances in the characterization of LCB. We are left with a situation where the resolution of standard analytical tools available for diagnosing LCB is much lower than our ability to synthesize sparse levels of precisely controlled branching. The need to develop analytical methods to accurately detect and quantify these trace levels of LCB has become increasingly urgent, and analytical rheology has emerged as a particularly promising candidate [66].

Historically, three categories of methods have been used to diagnose LCB in synthetic polymers: (i) g-ratio analysis, (ii) spectroscopy, and (iii) rheology.

The presence of branch points causes a conformational change in BPs, often causing them to shrink relative to LPs of the same molecular weight. The “g-ratio” in g-ratio analysis is the ratio of the sizes (typically the mean-square radii of gyration) of a branched polymer and a linear polymer of identical molecular weight. Theoretical estimates of g for different polymer architectures are available via the Zimm-Stockmayer equations [67, 68]. For example, the g-ratio of a symmetric star molecule is g = (3f − 2)/f², where f is the number of branches. Similarly, for randomly branched comb polymers with backbones and branches of constant size, Casassa and Berry [69] obtained a closed-form expression for g in terms of the ratio of the molecular weight of the branch to that of the backbone.
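For the symmetric star, the Zimm-Stockmayer result is simple enough to tabulate directly; the sketch below (the function name is mine, not from the cited references) shows how g falls below unity as arms are added.

```python
# Zimm-Stockmayer g-ratio for a symmetric star with f equal arms:
#     g = <Rg^2>_branched / <Rg^2>_linear = (3f - 2) / f**2
# For f = 1 or 2 the "star" is just a linear chain, so g = 1.

def g_star(f):
    if f < 1:
        raise ValueError("need at least one arm")
    return (3 * f - 2) / f ** 2

for f in (2, 3, 4, 6):
    print(f, g_star(f))
# g decreases as arms are added: the branched coil is more compact
# than a linear chain of the same total molecular weight.
```

For f = 3 this gives g = 7/9, and for f = 6, g = 4/9, illustrating why lightly branched chains (small effective f) produce only a modest contraction, as noted below.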

More accurate results are obtained from Monte Carlo simulations and other numerical techniques [70–76]. The principal idea is to measure the size and molecular weight of a BP using experimental techniques such as SEC, light scattering, or intrinsic viscosity (Mark-Houwink plots) and then use the available g-ratios to determine the nature and degree of branching. The use of g-ratio analysis in isolation to detect LCB is risky, because trace levels of LCB result in only a modest change in size (see Figure 3) [66, 77]. However, it is a useful tool when used in conjunction with more sophisticated multidetector methods [40, 78, 79], as described later.

13C nuclear magnetic resonance (NMR) is a spectroscopic method that can be used to directly probe LCB [80–85] by detecting the tertiary C-atoms linking a branch to the backbone. Indeed, it is widely used as a gold standard against which the accuracy and sensitivity of other techniques are measured. However, 13C NMR is a questionable gold standard at best and suffers from several technical limitations. For example, NMR is unable to differentiate between short and long branches: branches with more than about 10 carbon atoms in the side chain are indistinguishable from one another. This length is far below the threshold of around 100–200 carbon atoms that defines “long-”chain branching [42, 86–91].

In summary, both g-ratio analysis and NMR are tedious and expensive and require specialized training to carry out the necessary experiments. However, their most important drawback is that they probe fundamentally weak signals. Since the actual number of branch points is often a very small fraction of the total number of backbone carbons, the signal-to-noise ratio can become a serious issue [66, 92]. This opens up a natural niche for analytical rheology, as molecular dynamics are extremely sensitive to molecular structure, and small changes in LCB levels result in major changes in rheology [93].

2.3. Model-Driven Analytical Rheology

Although analytical rheology requires reasonably accurate rheological models, they need not be perfect. This can be explained schematically using Figure 4. The extreme sensitivity of rheology to molecular structure can be conceptually visualized as a straight line with a large slope: small changes in the structure are manifested as large changes in the rheological response. The inverse problem of analytical rheology is visualized in this schematic as finding the “structure” corresponding to a given measured “rheology.” Since rheological models are not perfect, they are characterized by an uncertainty, which is depicted by the dotted lines parallel to the “true” model (red line). When the slope of the line is large, the inverse solution, even in the presence of uncertainty, is relatively tightly bracketed. Thus, the inversion of imperfect models in such situations can be practically meaningful. In contrast, if the structure-rheology dependence were more modest, the slope of the line in the conceptual diagram would be smaller. Given models of comparable reliability, we would not be able to bracket the “true” structure with as much confidence. This is a key insight: the premise of model-driven analytical rheology relies on the strong dependence of rheology on structure to make up for the limitations of the rheological models themselves.
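The bracketing argument can be made concrete with a toy linear model. In the sketch below, all numbers are illustrative (not taken from any real measurement); the point is only that, for a fixed model uncertainty, the width of the inferred-structure bracket shrinks in proportion to the slope.

```python
# Conceptual sketch of the Figure 4 argument: invert a toy model
#     rheology = slope * structure
# whose predictions carry an uncertainty band of half-width delta.

def invert_with_uncertainty(measured, slope, delta):
    """Return (lower, upper) bounds on the structure consistent with a
    measurement, given a model band of half-width delta."""
    center = measured / slope
    half_width = delta / slope
    return center - half_width, center + half_width

# Same measurement and same model uncertainty, different sensitivities:
steep = invert_with_uncertainty(measured=100.0, slope=50.0, delta=5.0)
shallow = invert_with_uncertainty(measured=100.0, slope=2.0, delta=5.0)
print(steep)    # tight bracket: high sensitivity forgives model error
print(shallow)  # loose bracket: low sensitivity does not
```

With a slope of 50 the structure is pinned to within ±0.1, while a slope of 2 leaves a bracket 25 times wider for the same model uncertainty, which is exactly the sense in which strong structure-rheology coupling compensates for imperfect models.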

3. Models for Polymer Dynamics and Rheology

In this section, we present a brief historical account of the modeling of polymer melt rheology before considering the most popular ansatz, namely, the tube model, more closely. The tube model and its modifications are ideally suited for inversion using analytical rheology because they offer a reasonable mix of accuracy and efficiency. We will end this section with a brief remark about other more accurate, but computationally intensive, methods for modeling polymer dynamics and rheology. In the future, with faster computers and better algorithms, it is conceivable that some of these models could serve as the forward model.

3.1. Historical Development

Rubber elasticity theory considered the number of configurations available to Gaussian strands constrained between permanent chemical cross-links and successfully described the relationship between an applied affine deformation and the stresses generated in a rubber-like material [94–101]. To account for the transient nature of polymer entanglements, Green and Tobolsky generalized the rubber elasticity theory by visualizing entanglements as physical cross-links that are continuously broken and reformed [98–100, 102–104]. The resulting temporary network model with a single time constant is the simplest model for polymer melts.

While the network theory for such rubber-like liquids was being advanced by Lodge and Yamamoto [105–110], progress was simultaneously being made in molecular modeling at the other end of the concentration spectrum, namely, dilute polymer solutions. Starting with Kuhn, who described the configuration of chains using a random coil model, landmark contributions were made which established the importance of the chain-like structure of polymers [111–117]. It was recognized that the connectivity of monomer units in a polymer chain should be directly represented in molecular models and constitutive equations. Simultaneous advances in experiments and theory led to the discovery of universality: at time scales larger than those corresponding to the motion of individual chemical bonds, the dynamical properties of polymer chains of different chemical structures are similar. This observation greatly facilitated the development of coarse-grained descriptions of polymer dynamics, which continue to be popular to this day.

Rouse exploited the Gaussian character of polymer coils and developed the famous “Rouse model” by representing a polymer in dilute solution as a series of beads, each attached to its neighbors by springs, jostled by thermal fluctuations in a viscous solvent [113]. The viscous drag was borne by the beads, and the elasticity was contributed by the springs. Interestingly, the Rouse model turned out to be a pretty poor model for dilute polymers—its intended target. However, it was found to be an excellent model for unentangled polymer melts, where its neglect of hydrodynamic coupling between different polymers is justified [118–124]. Zimm improved upon the Rouse model by including hydrodynamic interactions, which led to a better quantitative understanding of dilute solutions [114].

For long linear chains in melts and concentrated solutions, the zero-shear viscosity scales as M^3.4, where M is the molecular weight of the linear chain, whereas in dilute solutions the exponent is much closer to unity. To explain this much stronger dependence of rheology on structure, decorations in the form of elastic coupling of the submolecules and enhanced friction due to surrounding chains were added to the Rouse model [125, 126]. However, the modifications were ad hoc and lost their connection to the molecular picture. The gridlock was broken after nearly two decades, when de Gennes [127] proposed the famous reptation model for a linear chain trapped in a permanent network. He suggested that the “test” chain could renew its configuration by slithering back and forth along the direction of the chain backbone like a snake.

This reptation or “tube” picture (described in greater detail shortly) was extended by Doi and Edwards [128–131] to concentrated polymer solutions, which enabled the computation of various dynamical properties and eventually led to the famous Doi-Edwards constitutive equation for linear chains. In branched molecules, reptation is suppressed because the branch point anchors the molecule in the middle. Pearson and Helfand [132] considered the retraction of a side branch in the tube to describe linear viscoelastic data. These seminal efforts were later modified to include the evolution of the tube itself, which is produced by the relaxation of matrix chains. This accelerating effect in real polymer melts was captured by the approximate double reptation model for linear chains [133–135] and the dynamic dilution theory of Ball and McLeish [136] for BPs.

3.2. The Tube Model

de Gennes offered a phenomenological model to account for the slowdown in the molecular motion of linear chains in a melt for M > M_e, where M_e is the molecular weight between entanglements. Originally, he considered the motion of a long polymer chain in a fixed or cross-linked rubber network [127]. The analysis could then be generalized to describe the dynamics of a chain moving in a mesh formed by surrounding chains instead of the fixed network. Such a “probe” chain, submerged in a sea of polymers, could be thought of as being confined to a hypothetical “tube.” The tube permits the motion of the chain along its own contour, while limiting its motion in the lateral direction to length scales of the order of the tube diameter (see Figure 5).

For an entangled system of polymer molecules in the linear viscoelastic regime, the chains relax their configurations by three mechanisms: (i) reptation, (ii) contour length fluctuations, and (iii) constraint release.

Reptation is a method of stress relaxation that involves the curvilinear diffusion of a test chain along the axis of its tube, due to the uncrossability of the topological constraints that the tube represents. Portions of the chain crinkle up within the tube, creating slack or density fluctuations, which diffuse through the tube and get released at the ends. When a linear molecule “snakes” its way out in this manner, it vacates portions of the original tube and relaxes the stress associated with those portions. After a system is subjected to an initial strain, the isotropic equilibrium distribution of chains is disturbed, and the associated anisotropy generates stress. As parts of the chain project out of the original tube, they seek new tubes (get reentangled with other matrix chains), which, unlike the original tube, are isotropic and hence are not associated with any stress. The unrelaxed stress at any time, therefore, is proportional to the fraction of the original tube that remains unvacated [104].
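For pure reptation, the Doi-Edwards theory gives the surviving tube fraction in closed form, and the relaxation modulus is proportional to it. A minimal sketch, assuming the standard single-chain result with tau_d the reptation (disengagement) time:

```python
import math

# Doi-Edwards tube-survival function for pure reptation:
#     mu(t) = sum over odd p of (8 / (p^2 * pi^2)) * exp(-p^2 * t / tau_d)
# The relaxation modulus then scales as G(t) ~ GN0 * mu(t), with mu(0) = 1:
# at t = 0 the entire original tube is intact, and none of the stress
# has relaxed.

def mu_reptation(t, tau_d, pmax=201):
    """Fraction of the original tube unvacated at time t (pmax odd modes)."""
    return sum(8.0 / (p * p * math.pi ** 2) * math.exp(-p * p * t / tau_d)
               for p in range(1, pmax + 1, 2))

print(mu_reptation(0.0, tau_d=1.0))  # close to 1: tube fully intact
print(mu_reptation(1.0, tau_d=1.0))  # tube fraction surviving at t = tau_d
```

Because of the p^2 in the exponential, the p = 1 mode dominates at long times, so G(t) decays nearly single-exponentially with time constant tau_d, which is why monodisperse linear melts show a well-defined terminal relaxation.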

The second mechanism of stress relaxation is called contour-length or primitive-path fluctuations. Star polymers cannot reptate, because the branch point pins the molecule down. The chain end of a star arm has to execute large, entropically unfavorable primitive-path fluctuations towards the tether point and then “breathe out” to explore a new tube. Just as in reptation, when the molecule reexpands into a new tube, the stress associated with the original tube is relaxed. Pearson and Helfand [132] considered the steep entropic barrier faced by a star arm attempting to retract towards the branch point. In contrast to linear chains, whose relaxation time grows as a power of the molecular weight (M^3 in reptation theory), it has been shown for star arms [132, 137] that the relaxation time exhibits an exponential dependence on the arm molecular weight. Primitive-path fluctuations are a dominant mode of relaxation in polymers with more complicated branching architectures like H-polymers and combs. The relaxation of the backbone of an H-polymer occurs after the arms have completely relaxed by a retraction process similar to that described above for star arms.
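The contrast between the power-law and exponential scalings can be sketched numerically. Prefactors are omitted, and the value nu = 15/8 corresponds to the simplest Pearson-Helfand treatment (without dynamic dilution), so the absolute numbers are purely illustrative; only the trend matters.

```python
import math

# Illustrative comparison of relaxation-time scalings (prefactors dropped):
# linear chains relax by reptation with tau ~ (M/Me)**3, while a star arm
# retracts against an entropic barrier, tau ~ exp(nu * Ma/Me).

def tau_linear(m_over_me):
    """Reptation scaling for a linear chain, in units of the prefactor."""
    return m_over_me ** 3

def tau_star_arm(ma_over_me, nu=15.0 / 8.0):
    """Arm-retraction scaling; nu = 15/8 is the simplest Pearson-Helfand
    barrier coefficient, used here only for illustration."""
    return math.exp(nu * ma_over_me)

for z in (5, 10, 20):
    print(z, tau_linear(z), tau_star_arm(z))
# The exponential grows far faster than the power law: long star arms
# relax astronomically more slowly than linear chains of comparable length.
```

This exponential sensitivity is the microscopic origin of the dramatic rheological signature of even trace amounts of LCB discussed in Section 2.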

As a probe chain reptates in its tube, the surrounding chains which form the tube also reptate in their own tubes. When a portion of a matrix chain slips past the probe chain, it releases the topological constraint it hitherto imposed on the probe chain and the latter is then free to reorient [138140]. The movement of the matrix chains thus induces motion of the tube confining the probe chain, which, in turn, allows large scale motion of the probe chain in the lateral direction. The increased latitude for lateral motion can also be viewed as an increase in the effective diameter of the tube around the test chain. This increase in tube diameter due to simultaneous relaxation of matrix chains is incorporated into the tube model for BPs by using the notion of dynamic tube dilution [136].

3.2.1. Hierarchical Models

These three concurrent molecular processes can be woven into a self-consistent mathematical framework to build computational models [93, 141–151]. The framework and models are too complicated and numerous to describe in detail here, but it is useful to point out the idea of hierarchical relaxation which, besides the “tube,” is central to all these models.

For complicated branched architectures, it has been postulated and confirmed by computer simulations [152, 153] that stress relaxation proceeds in a “hierarchical” manner. The tips of the outer-most branches are the first to relax via contour-length fluctuations. Once they have completely relaxed, they are modeled as viscous drag elements on the branches (or backbone) to which they are attached, which in turn commence relaxing. This process continues until the stress associated with the inner-most parts of the polymer is completely relaxed. These hierarchical models are able to predict the linear viscoelastic response of arbitrary mixtures of polymers with arbitrary architectures in melts and have become increasingly accurate [17, 104, 136, 141–144, 146, 154–157], although numerous shortcomings exist and continue to be exposed [158–161]. The objective of these hierarchical models is the prediction of the complex modulus, G*(ω), from structural data obtained using light scattering, SEC, and NMR, using a small set of parameters that are determined by the chemistry of the polymer and the temperature at which the measurement is carried out. Typically these include the density, the equilibration time τ_e, the entanglement molecular weight M_e, and sometimes the plateau modulus G_N^0 [162]. The same parameter values are used for different architectures with the same underlying chemistry.
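Whatever the hierarchical bookkeeping, the output of such a calculation can typically be represented as a discrete relaxation spectrum, from which G*(ω) follows by summing Maxwell modes. A minimal sketch with a hypothetical three-mode spectrum (all moduli and relaxation times invented for illustration):

```python
# Complex modulus from a discrete Maxwell relaxation spectrum:
#     G'(w)  = sum_k g_k * (w*tau_k)**2 / (1 + (w*tau_k)**2)
#     G''(w) = sum_k g_k * (w*tau_k)    / (1 + (w*tau_k)**2)
# The (g_k, tau_k) pairs below are hypothetical, purely for illustration.

def complex_modulus(w, modes):
    """Return (G', G'') at angular frequency w for modes = [(g_k, tau_k)]."""
    gp = sum(g * (w * tau) ** 2 / (1 + (w * tau) ** 2) for g, tau in modes)
    gpp = sum(g * (w * tau) / (1 + (w * tau) ** 2) for g, tau in modes)
    return gp, gpp

modes = [(1.0e5, 10.0), (2.0e5, 0.1), (3.0e5, 1e-3)]  # (g_k [Pa], tau_k [s])
print(complex_modulus(1.0, modes))
```

At frequencies far above the fastest mode, G' approaches the sum of the mode strengths, playing the role of the plateau modulus in this toy spectrum; at low frequencies the familiar terminal slopes (G' ~ w^2, G'' ~ w) emerge.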

3.3. State of the Art

Enormous progress has been made in modeling the linear and nonlinear rheology of LPs using the tube model. Likhtman and McLeish [142] rigorously integrated all the known relaxation processes in the linear regime (reptation, contour length fluctuations, constraint release, and longitudinal stress relaxation along the tube) simultaneously. Similarly, Graham et al. refined the original Doi and Edwards model to predict the rheological response of linear chains over the entire range of deformation rates, from the linear to the strongly nonlinear regime [155]. This required the additional effects of chain stretch and convective constraint release, which are not present in the linear regime.

For BPs, the hierarchical models described above represent the state of the art in the linear viscoelastic regime. Progress in the nonlinear rheology of BPs has been less forthcoming. McLeish and Larson [53] developed a molecular constitutive model for a BP with an arbitrary number of arms, but only two branch points. This “pom-pom model” was able, for the first time, to qualitatively describe strain hardening in extensional flows and strain softening in shear flows. Subsequent modifications have introduced drag-strain coupling [163], corrections for reversing flows [164], and a zero second normal stress coefficient [165]. Inkson et al. [166] phenomenologically extended the pom-pom model to a multimode model and found that it described the nonlinear rheology of commercial LDPE quite well.

3.4. Computational Models

In addition to the tube model, there are a number of other simulation methods for modeling polymer dynamics, such as slip-link [167–178], lattice [179–191], and molecular dynamics [192–205] models. These methods are more “accurate” since they avoid the central simplification of the mean-field tube theory, namely, the phenomenological approximation of the effects of a complex multichain environment by a mean-field “tube.” While this approximation results in an analytically tractable single-body problem, it buries assumptions that have to be made about multibody interactions, such as the phenomenon of constraint release, the coupling of entanglements in a melt, and the form of the mean-field potential. No doubt the errors introduced by these assumptions sometimes cancel, but not always, and certainly not in any predictable fashion.

It should be pointed out that increased accuracy comes at the price of increased computational cost. A significant problem in modeling polymers is the wide separation of dynamical timescales, ranging from the order of picoseconds (10⁻¹² s) for local bond vibrations, to the order of seconds or hours (10⁰–10⁴ s) for phenomena of greatest relevance to rheology. A possible solution is coarse-graining, where fast dynamics are averaged out, and only slow modes of motion are explicitly accounted for. Thus, slip-link models are coarse-grained at a length scale of the order of ~10 nanometers, and a time scale of the order of 10–100 nanoseconds. Similarly, lattice and molecular dynamics models are coarse-grained at the level of an effective monomer, which roughly corresponds to a length scale of about a nanometer. Despite the increase in computational power in recent years and the possibility of coarse-graining, these models are not yet fast enough to be useful for analytical rheology.

4. Methods and Progress

In this section, I will review developments in the analytical rheology of LPs and BPs. Advances have been more forthcoming for polydisperse mixtures of LPs for three reasons: (i) there is a plethora of well-characterized experimental data [60, 206–212], (ii) rheological models have been built, tested, refined, and calibrated extensively [213], and (iii) it is possible to use approximate “double reptation-” type models [135, 214], in lieu of the full tube model [142], which makes the mathematical model and its numerical solution more expedient.

Numerous attempts have been made at the analytical rheology of BPs, especially for diagnosing branching content, with varying degrees of success. The primary impediments are that (i) synthesizing well-defined and well-characterized BPs is highly nontrivial [7, 215–221], (ii) the process of testing and calibrating the rheological models exposes many weaknesses of the tube model that are not manifested for LPs [158, 159], and (iii) there is no counterpart to the “double reptation” model because of the complexity of the computational models. Even so, the past few years have seen tremendous progress.

4.1. Linear Polymers

A particularly influential idea in the development of analytical rheology for LPs was “double reptation,” which offered a mathematically convenient approximation with empirical support [54, 222], to a more complete but complex theory [135, 142, 214]. It postulated that the relaxation modulus G(t) was related to the molecular weight distribution w(M) via the convolution

G(t) = G_N⁰ [ ∫_{M_e}^{∞} F^{1/β}(t, M) w(M) dM ]^β,  (2)

where M_e is the entanglement molecular weight, the kernel F(t, M) is the relaxation function of a monodisperse LP of molecular weight M, and G_N⁰ is the plateau modulus. If W(M) is the weight fraction of the polymers with molecular weight less than M, then the molecular weight distribution is defined as w(M) = dW(M)/dM. β is the mixing exponent, which is 1 for single reptation in the original “reptation in a fixed tube” model [104], and 2 in the original double reptation model. The double reptation model originates from the idea of entanglements as binary contacts [62], which, by itself, is still an unsettled issue [189, 223–228]. The restriction on the value of β can be relaxed on numerical and physical grounds, allowing, in principle, any value of β. Depending upon how the experimental data is parsed, values of β ranging from slightly below 2 to nearly 4 have been used [229–231].
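As a concrete illustration of the convolution above, the following sketch evaluates the double reptation modulus and the zero-shear viscosity for a hypothetical binary blend, using the single-exponential kernel. All numerical values (plateau modulus, prefactor K, blend composition) are illustrative assumptions, not data from the text.

```python
import math

# Hypothetical material parameters (illustrative assumptions).
GN0 = 2.0e5        # plateau modulus, Pa
K = 1.0e-3         # tau_d(M) = K * M**3.4, with M in kg/mol
BETA = 2.0         # double reptation mixing exponent

def tau_d(M):
    """Terminal relaxation time of a monodisperse linear chain."""
    return K * M**3.4

def G_blend(t, components):
    """Double reptation: G(t) = GN0 * [sum_i w_i exp(-t/tau_d(M_i))]**BETA."""
    s = sum(w * math.exp(-t / tau_d(M)) for w, M in components)
    return GN0 * s**BETA

blend = [(0.7, 50.0), (0.3, 200.0)]   # (weight fraction, M) pairs

# Zero-shear viscosity eta_0 = integral of G(t) dt, by the trapezoidal rule.
t_max = 50.0 * tau_d(200.0)
N = 200_000
dt = t_max / N
eta0 = sum(0.5 * dt * (G_blend(i * dt, blend) + G_blend((i + 1) * dt, blend))
           for i in range(N))
```

Because β = 2, the squared mixing rule generates a cross-term between the short and long components, so η₀ of the blend is not a simple weighted sum of the component viscosities.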

A number of plausible forms, ranging from purely theoretical to completely empirical, have been proposed for the kernel F(t, M). The more complex, empirically inspired kernels usually introduce additional model/material-dependent parameters. Some popular choices, roughly sorted in order of increasing complexity, include the following.

(i) The Tuminello or step function kernel [133]:

F^{1/β}(t, M) = H(τ_d(M) − t),  (3)

where H is the Heaviside step function and τ_d(M) is the terminal relaxation time, which is often set to τ_d(M) = K(T) M^α, where α ≈ 3.4 and K(T) is a function of temperature that encodes time-temperature superposition using, for example, Williams-Landel-Ferry or WLF parameters [232].

(ii) The single exponential form [135]:

F^{1/β}(t, M) = exp(−t/τ_d(M)).  (4)

It smoothens out the discontinuity in the Tuminello kernel at t = τ_d(M).

(iii) The Doi-Edwards or reptation kernel [104]:

F^{1/β}(t, M) = (8/π²) Σ_{p odd} (1/p²) exp(−p² t/τ_d(M)).  (5)

It is derived from the original reptation model without contour-length fluctuations or constraint release.

(iv) The BSW (Baumgaertel, Schausberger, and Winter) kernel [233], based on the BSW relaxation spectrum:

F^{1/β}(t, M) = n_e ∫₀¹ u^{n_e − 1} exp(−t/(u τ_d(M))) du,  (6)

where n_e is a material-dependent parameter of the order of 0.5.

(v) The time-dependent diffusion de Cloizeaux model [134]:

F^{1/β}(t, M) = (8/π²) Σ_{p odd} (1/p²) exp(−p² U(t, M)),  (7)

with

U(t, M) = t/τ_d(M) + (M*/M) g(M t/(M* τ_d(M))),  g(y) = Σ_{m=1}^{∞} (1/m²) [1 − exp(−m² y)],  (8)

where M* is an additional material parameter.

Typically, the experimental data G*(ω) = G′(ω) + iG″(ω) is available at a set of N discrete experimental frequencies as {G*(ω_i)}, where i = 1, 2, …, N. The idea is to minimize the objective function

χ² = Σ_{i=1}^{N} [ (G′(ω_i) − G′_t(ω_i))² + (G″(ω_i) − G″_t(ω_i))² ],  (9)

where the subscript “t” refers to the theoretical dynamic moduli corresponding to the relaxation modulus G(t) in (2).

We can translate between G(t) and G*(ω) either by using Schwarzl approximations [231, 234] or by extracting the underlying relaxation spectrum h(τ) [62, 229, 235, 236]. The relationships between G′(ω), G″(ω), and h(τ) are given by

G′(ω) = ∫_{−∞}^{∞} h(τ) [ω²τ²/(1 + ω²τ²)] d ln τ,
G″(ω) = ∫_{−∞}^{∞} h(τ) [ωτ/(1 + ω²τ²)] d ln τ.  (10)

The extraction of h(τ) from G*(ω) is a celebrated inverse problem, with its own chequered past [18, 237–254].
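The discrete analogue of these relations, with a handful of Maxwell modes standing in for h(τ), can be sketched as follows; the mode strengths and times are illustrative assumptions. The script also checks the terminal scalings G′ ∝ ω² and G″ ∝ ω.

```python
import math

# Discrete Maxwell spectrum {(g_i, tau_i)} standing in for h(tau).
# Mode values are illustrative assumptions.
spectrum = [(1.0e5, 1.0e-2), (5.0e4, 1.0e0), (1.0e4, 1.0e2)]  # (Pa, s)

def G_storage(w):
    """G'(w) = sum_i g_i (w tau_i)^2 / (1 + (w tau_i)^2)."""
    return sum(g * (w * tau)**2 / (1.0 + (w * tau)**2) for g, tau in spectrum)

def G_loss(w):
    """G''(w) = sum_i g_i (w tau_i) / (1 + (w tau_i)^2)."""
    return sum(g * (w * tau) / (1.0 + (w * tau)**2) for g, tau in spectrum)

# Terminal-region check: log-log slopes of G' and G'' as omega -> 0.
w_lo, ratio = 1.0e-6, 10.0
slope_storage = math.log(G_storage(w_lo * ratio) / G_storage(w_lo)) / math.log(ratio)
slope_loss = math.log(G_loss(w_lo * ratio) / G_loss(w_lo)) / math.log(ratio)
```

The computed slopes approach 2 and 1, respectively, which is the terminal behavior expected of any melt with a finite longest relaxation time.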

Once a choice of the kernel F(t, M) and the method for translating between G(t) and G*(ω) are made, the process of inferring w(M) from (1) essentially involves “solving” (2). This is also an inverse problem, and numerous attempts have been made at figuring out optimal inversion strategies. Broadly, three categories of strategies can be identified.

(i) Regularization
Extraction of h(τ) in (10) and w(M) in (2) involves inverting Fredholm integrals of the first kind. Regularization is a well-known strategy for addressing the ill-posedness of such problems [255]. In Tikhonov regularization [256], a smoothness criterion is applied on w(M) by modifying χ² in (9), to address the multiplicity of w(M) consistent with a measured G*(ω), due to the inherent susceptibility of the inverse problem to perturbations in the data [229, 230, 257–259]. More precisely, the numerical solution of (2) involves the discretization of the function w(M) into the array w_j = w(M_j), with j = 1, 2, …, n, and of the time variable as t_i, with i = 1, 2, …, N, as

[G(t_i)/G_N⁰]^{1/β} = Σ_{j=1}^{n} F^{1/β}(t_i, M_j) w_j ΔM_j.

This equation can be thought of as a linear regression problem Ax = b, with A_{ij} = F^{1/β}(t_i, M_j) ΔM_j, x_j = w_j, and b_i equal to the left-hand side of the above equation. In the classical linear least squares method, the residual ‖Ax − b‖² is minimized by solving the normal equations AᵀA x = Aᵀb. However, if the matrix A is ill-conditioned, as is often the case in these problems, Tikhonov regularization adds a regularization term to prefer solutions with desirable properties and seeks a solution that minimizes ‖Ax − b‖² + ‖Γx‖², where Γ is the Tikhonov matrix. The normal equations are modified to (AᵀA + ΓᵀΓ) x = Aᵀb [256]. If the matrix Γ is set to the identity, Γ_{ij} = δ_{ij}, where δ_{ij} is the Kronecker delta, then the regularization prefers solutions with small norms. It can be tailored to prefer solutions with small second derivatives (or smoother w(M)) by setting Γ proportional to a discrete second-derivative operator. The strength of the smoothness criterion can be controlled, and it is found that the inferred molecular weight distribution is somewhat sensitive to the strength of regularization [257]. Using similar methods, Maier et al. [229] extracted the kernel function F(t, M) from empirical data and found that it did not match any of the standard kernels satisfactorily.
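A minimal, self-contained sketch of Tikhonov-regularized inversion on a synthetic version of the discretized problem is given below. The grid, kernel, “true” weights, and regularization strength are all illustrative assumptions, and the grid spacing ΔM_j is absorbed into w_j for simplicity.

```python
import math

# Synthetic inverse problem: recover w from noiseless "data" b = A w_true.
K, ALPHA = 1.0e-3, 3.4
Ms = [10.0 * 2**k for k in range(8)]      # molecular weight grid (assumed)
ts = [10.0 * 2**k for k in range(12)]     # time grid (assumed)

def kernel(t, M):
    """Single-exponential double reptation kernel F^(1/beta)(t, M)."""
    return math.exp(-t / (K * M**ALPHA))

w_true = [0.0, 0.4, 0.0, 0.0, 0.6, 0.0, 0.0, 0.0]   # "unknown" weights
A = [[kernel(t, M) for M in Ms] for t in ts]
b = [sum(A[i][j] * w_true[j] for j in range(len(Ms))) for i in range(len(ts))]

def solve(Mat, rhs):
    """Gaussian elimination with partial pivoting."""
    n = len(rhs)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(Mat)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            for k in range(c, n + 1):
                aug[r][k] -= f * aug[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][k] * x[k] for k in range(r + 1, n))) / aug[r][r]
    return x

# Tikhonov with Gamma = lambda * I: solve (A^T A + lam I) x = A^T b,
# where lam plays the role of lambda^2 (value is an illustrative assumption).
lam = 1.0e-6
n = len(Ms)
AtA = [[sum(A[i][r] * A[i][c] for i in range(len(ts))) + (lam if r == c else 0.0)
        for c in range(n)] for r in range(n)]
Atb = [sum(A[i][r] * b[i] for i in range(len(ts))) for r in range(n)]
w_reg = solve(AtA, Atb)
```

With a small regularization strength and noiseless synthetic data, the regularized solution essentially recovers the two nonzero weight fractions; increasing `lam` trades fidelity for stability, which is the sensitivity to regularization strength noted above.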

(ii) Parameterization
In this strategy, a parametric form is assumed for w(M), often by using prior knowledge of the polymerization process. For example, log-normal distributions are typical of Ziegler-Natta or addition polymers, while Schultz distributions are typical of condensation or metallocene polymers [16]. If the sample is suspected to be monomodal, then a generalized form such as the three-parameter generalized exponential (GEX) can be used [231, 260], which can capture the features of Schultz or log-normal distributions at the cost of an extra free parameter:

w(M) = [b/(M₀ Γ((a + 1)/b))] (M/M₀)^a exp[−(M/M₀)^b],

where a, b, and M₀ are the three parameters. For more complex multimodal w(M), this form can be extended to include a sum of generalized exponential models [231, 261, 262].
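One attraction of the GEX parameterization is that its moments are analytic, ⟨M^k⟩ = M₀^k Γ((a+1+k)/b)/Γ((a+1)/b), so averages like M_w and M_n follow directly from the fitted parameters. A short sketch, with illustrative parameter values, follows.

```python
import math

# GEX weight distribution (normalization as in the text); parameter values
# are illustrative assumptions. With b = 1 this reduces to a Schultz-like form.
a, b, M0 = 2.0, 1.0, 30.0

def w_gex(M):
    return (b / (M0 * math.gamma((a + 1) / b))) * (M / M0)**a * math.exp(-(M / M0)**b)

# Analytic averages from <M^k> = M0^k * Gamma((a+1+k)/b) / Gamma((a+1)/b):
Mw = M0 * math.gamma((a + 2) / b) / math.gamma((a + 1) / b)   # weight average
Mn = M0 * math.gamma((a + 1) / b) / math.gamma(a / b)          # number average
PDI = Mw / Mn

# Numerical cross-check of Mw = integral of M * w(M) dM (trapezoidal rule).
N, Mmax = 200_000, 2000.0
dM = Mmax / N
Mw_num = sum(0.5 * dM * ((i * dM) * w_gex(i * dM) + ((i + 1) * dM) * w_gex((i + 1) * dM))
             for i in range(N))
```

For these assumed parameters the analytic and numerical weight averages agree closely, and the polydispersity index M_w/M_n is fixed entirely by a and b.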
Pattamaprom and Larson used the dual-constraint model to modify the kernel F(t, M) and parametrically deduced the w(M) of multimodal and polydisperse LPs [263]. The dual-constraint model can be used to predict the linear viscoelasticity of linears and stars, and hence offers the possibility of treating BPs in a similar manner [264].

(iii) Stabilization
This strategy differs from the two above in significant ways. The philosophical difference stems from the fact that, in practice, we often do not seek the complete distribution w(M), but only its principal moments [62, 235, 236, 265]. For example, we can define the k-th moment of w(M) as

μ_k = ∫₀^∞ M^k w(M) dM,

and realize that the usual number-averaged molecular weight is M_n = 1/μ_{−1}, the weight-averaged molecular weight is M_w = μ₁, and so forth. In the same spirit, one can consider moments of G(t) as

g_ν = ∫₀^∞ t^ν G(t) dt,

and realize that the zero-shear viscosity is η₀ = g₀, the recoverable compliance is J_e⁰ = g₁/g₀², and so forth. In general, k and ν need not be integers.
Using the idea of Mellin transforms [62, 266, 267], Mead established a connection between the moments μ_k, which tell us something about the molecular weight distribution, and the moments g_ν, which tell us something about the rheology. In particular, for a monomodal w(M) with the single exponential form for the mixing rule or kernel and τ_d(M) = K M^α, it can be shown that [62, 235]:

∫₀^∞ t^{s−1} [G(t)/G_N⁰]^{1/β} dt = Γ(s) K^s ⟨M^{αs}⟩,  (16)

where Γ is the standard gamma function and ⟨M^{αs}⟩ = μ_{αs}. Thus, computing a particular moment of w(M) involves extracting G(t) from the experimental data, taking the appropriate moment, and using (16). For more complicated kernels and multimodal w(M), it is usually not possible to write analytical expressions similar to (16), and the corresponding integrals may have to be solved numerically. Computing moments of a function or distribution involves integrals, which are inherently stable and error-smoothing. The latter allows this technique to naturally filter noise in the experimental data [268]. In addition, since the aim is not necessarily to characterize the entire w(M), the calculation is efficient.
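The moment-based route can be sketched for the simplest case: a monodisperse melt with the single-exponential kernel, β = 2, τ_d = K M^3.4, and s = 1, recovering M from a synthetic G(t). The values of K and the molecular weight are illustrative assumptions.

```python
import math

# Mellin-transform moment extraction for a monodisperse melt:
#   integral_0^inf t^(s-1) [G(t)/GN0]^(1/2) dt = Gamma(s) * K^s * <M^(3.4 s)>
K, ALPHA = 1.0e-3, 3.4          # illustrative assumptions
M_true = 100.0
tau = K * M_true**ALPHA

def G_over_GN0(t):
    """[exp(-t/tau)]^2: double reptation (beta = 2) for a single component."""
    return math.exp(-t / tau)**2

# Left-hand side for s = 1: integral of [G/GN0]^(1/2) dt (trapezoidal rule).
s = 1.0
N, t_max = 200_000, 50.0 * tau
dt = t_max / N
lhs = sum(0.5 * dt * (math.sqrt(G_over_GN0(i * dt)) + math.sqrt(G_over_GN0((i + 1) * dt)))
          for i in range(N))

# Invert the Mellin relation: <M^(3.4 s)> = lhs / (Gamma(s) * K^s).
M_rec = (lhs / (math.gamma(s) * K**s))**(1.0 / (ALPHA * s))
```

The recovered molecular weight matches the input to within the quadrature error, illustrating why moment extraction is numerically stable: it only ever integrates the data.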

It should be pointed out that there are other methods that cannot be easily classified into one of the three categories above. For example, Wolpert and Ickstadt employed a nonparametric Bayesian inference scheme using Lévy random fields as priors and were able to accurately reflect the uncertainty inherent in the inversion process [269]. Similarly, Guzmán et al. [270] described the molecular weight distribution analytically (using GEX functions), assuming β = 2 (double reptation), and avoided the explicit numerical inversion of (2). Other attractive features of their method included an estimate of the uncertainty and a relaxation of the need to specify some material parameters required by other methods.

4.1.1. Frequency Window

The principle of time-temperature superposition allows us to perform rheological measurements at different temperatures and superpose the data to create a single master plot. It is thus possible to measure linear viscoelastic spectra that span 10 decades of frequency or more and observe the high-frequency Rouse modes, the plateau region, and the terminal region, where G′(ω) ∝ ω² and G″(ω) ∝ ω.

When sufficient experimental data is available in the terminal regime, the reconstruction of the molecular weight distribution is greatly assisted. While the general maxim “more is good” holds, the amount of low-frequency data required for accurate inversion seems to vary between one and three orders of magnitude below the cross-over frequency, depending on the polydispersity and chemistry of the sample [231, 261].

A different set of issues is present on the high-frequency end of the spectrum. Due to the so-called Bueche-Flory theorem [271], the dynamic mechanical response of chemically identical polymers in this regime is independent of the molecular weight or architecture. The “double reptation” equation (2) does not contain the contribution of Rouse modes, as the lower limit of the integral is M_e and not zero. Thus, the Rouse contribution has to be either (i) subtracted from the experimental data, as done heuristically, for example, by Thimm et al. and Léonardi et al. [230, 272, 273], or (ii) added to the G(t) from the double reptation model (2), obviating the need for preprocessing the experimental data, as done by van Ruymbeke et al. [231]. Indeed, the latter course of action may be more attractive, since van Ruymbeke et al. found that including the intermediate regime of frequencies, where the Rouse modes overlap with entanglement modes, provides valuable information.

4.1.2. Calibration

How do we know whether the w(M) inferred from rheology is acceptable? In particular, where do we stand if the w(M) obtained from SEC and rheology differ from each other? Practically, this situation is commonly encountered, and knowing how to deal with it is important in scoring different candidates (algorithms) for the analytical rheology of LPs and in establishing how much to trust a certain inferred w(M). Unfortunately, the answer to this question is not particularly clear. Consider the case of samples with small amounts of high molecular weight fractions. These fractions may pass undetected by SEC, while their rheological signals may be reflected quite clearly in the inferred w(M). Thus, we expect differences between the two molecular weight distributions when the sample contains high molecular weight tails. Otherwise, if the two molecular weight distributions lie within the confidence interval of the SEC, which is about 2–5% standard relative error on the weight-averaged molecular weight [231], then they should be considered indistinguishable. Better still, it is possible to perform uncertainty analysis on the w(M) obtained from inverse rheology to judge whether the predictions are statistically different.

A more subtle but related question is “how independent of SEC is the w(M) inferred from rheology?” It may be recalled that the formulation of the inverse problem using double reptation via (2) is not parameter-free. Typically, these parameters are regressed by fitting the forward problem to experimental linear viscoelasticity. Unfortunately, the w(M) for this calibration sample has to be known a priori and is usually obtained by SEC. Thus, the calibration of the parameters of the inverse rheology problem is tainted by SEC (or any alternative technique used to measure w(M)). While some of these effects may be mitigated by using a highly monodisperse sample, as van Ruymbeke et al. put it [231], “SEC and rheology are best viewed as complementary methods, with the most information to be gained from critical comparisons and consistency checks between the two.”

4.2. Branched Polymers

The use of rheology as an indirect probe of LCB has a rich tradition. Teasing apart the influence of the molecular weight and molecular weight distribution from that of branching is highly nontrivial, since they often have similar manifestations [274, 275]. There is a fair amount of diversity in the measurement and analysis of rheology, and in the resulting level of branching information provided. Some purely empirical tests measure a single average property, such as η₀ or J_e⁰, and tell us whether LCB is present or not [42, 276, 277]. On the other end of the spectrum, many model-driven approaches take into account the entire linear viscoelastic spectrum, and other available information, to quantitatively figure out the amount of LCB present.

4.2.1. Experimental Methods

We first consider some purely experimental techniques. It should be pointed out that the more successful recent techniques combine measurements from multiple detectors, such as SEC viscometry, light scattering, and melt rheology [66, 278–282], as discussed later.

Linear Viscoelastic Probes
The use of simple indices, which quantify the enhancement of the viscosity due to LCB above the η₀ versus M_w curve for LPs with the same chemistry, is a popular way of quantifying branching. These techniques include the viscosity index of long-chain branching (v.b.i.) [283], the Dow rheology index (DRI) [284], and the long-chain branching index (LCBI) [278]. Sometimes, the intrinsic viscosity [η] is used as a proxy for the average molecular weight [91, 278], since [η], unlike η₀, is relatively insensitive to small amounts of branching. For example, LCBI is used to quantify branching in polyethylenes and is defined in terms of η₀, measured in pascals at 190°C, and [η], measured in dL/g at 135°C in trichlorobenzene. Unlike DRI, LCBI is relatively independent of molecular weight and molecular weight distribution. By construction, LCBI is equal to zero for pure LPs. Michel introduced an empirically motivated index specifically designed for branched isotactic polypropylene by statistically analyzing a large number of experimental samples [285]. Similarly, Tsenoglou and Gotsis [286] and Tian et al. [49] developed semiempirical techniques to quantify LCB in polypropylenes modified by different melt modifying agents.
In a similar vein, the steady-state recoverable elastic compliance J_e⁰ is also correlated with LCB; unfortunately, it is correlated in the same manner with the molecular weight distribution, which makes it hard to tease apart the two effects [287–290]. In addition, the measurement of J_e⁰ is more time-consuming than that of η₀, and in many practical cases steady state may not even be experimentally attainable [277, 291].
Besides steady-state properties like η₀ and J_e⁰, which rely on a single number, numerous studies consider the characteristics of different viscometric functions. Thus, instead of comparing just the η₀ or J_e⁰ of linear and branched counterparts, they consider the difference in magnitude and form of entire functions. A popular form involves plotting the magnitude of the complex viscosity |η*(ω)| against the frequency ω [292–294]. This is related to the Vinogradov plot [295]. The presence of LCB is characterized by “wiggles” or inflection points in the |η*(ω)| versus ω plot, which may be discerned more clearly by plotting the slope.
The Cole-Cole plot of η″ versus η′, and the van Gurp-Palmen plot of the loss angle δ versus the magnitude of the complex modulus |G*|, are other alternatives that are very sensitive to changes in relaxation processes [88, 296–301]. For example, Robertson et al. [302] used the van Gurp-Palmen plot and correlated the peak in δ at intermediate frequencies with LCB by appealing to blending rules. Trinkle et al. [297, 300] found that a reduced van Gurp-Palmen plot, obtained by plotting δ against |G*|/G_N⁰, allows one to study the effects of molecular weight and molecular weight distribution in LPs systematically. The presence of multiple “bumps” on these plots indicates additional relaxation modes in the form of a bimodal molecular weight distribution, or, when that can be ruled out, the presence of LCB. Recent studies seem to indicate the superiority of the van Gurp-Palmen representation over viscosity-based plots for LCB detection [303].

Thermorheological Probes
In the linear viscoelastic methods above, measurements are effectively carried out at a constant temperature (ignoring, for the moment, the use of time-temperature superposition to create master plots). We can additionally exploit the fact that the rheology of LPs and BPs has distinct thermal signatures. For example, the flow activation energy E_a, which can be measured from the temperature dependence of the dynamic viscosity, is found to be a sensitive probe of branching [32, 291, 303]. Unfortunately, it does not distinguish between SCB and LCB [32, 304]. When this disadvantage is combined with the modest magnitude of the change in activation energy due to LCB, and with the thermorheological complexity that is common in many branched polyethylenes, it makes for a poor probe [32, 291, 303–306]. However, some researchers have been able to use thermorheological complexity, or the breakdown of time-temperature superposition, to indicate the presence of LCB [291].

Nonlinear Rheology
The bulk of the literature on analytical rheology relies on linear viscoelasticity, because experiments are more convenient to perform and less prone to experimental errors such as wall slip and shear banding. However, quite a few studies have considered the difference in the nonlinear shear or extensional response of LPs and BPs [91, 307–314] as a probe of LCB.
In extensional rheology, BPs show the onset of strain hardening at deformation rates far below the inverse Rouse time, which can be used to detect the presence of LCB. It should be pointed out that molecules with no “inner segments” (segments capped on both ends by branch points), like stars, do not show this effect [44, 104, 313]. Thus, it may not be particularly suitable for sparsely branched mPE, for example, which mostly contains linears and stars.
The damping function in simple shear offers the possibility of being a good probe for LCB, since it is relatively insensitive to the molecular weight distributions typically seen in industry [310]. Stadler and Münstedt [88, 315] used size-exclusion chromatography with multiangle laser light scattering (SEC-MALLS), together with shear rheological measurements, to analyze the viscosity functions of LCB polyethylenes, by separating the influence of long-chain branching and molecular weight on rheology. Doerpinghaus and Baird [316] fit the pom-pom model, which describes the nonlinear rheology of BPs semiquantitatively [53, 163, 165, 166, 317–320], to shear and extensional rheology for different levels of LCB.
A potentially systematic technique for studying the nonlinear response is Fourier transform rheology, or FT-rheology [314, 321–326]. This method is similar to the typical linear viscoelastic frequency sweep, except that, instead of small-amplitude oscillatory shear, a large-amplitude oscillatory shear (LAOS) deformation at a particular frequency ω, with a large strain amplitude, is applied. This results in the breakdown of the Boltzmann superposition principle, and the resulting torque or stress is not a simple harmonic function; instead, it contains higher harmonics. Indeed, by performing Fourier analysis on the stress response, higher odd harmonics, with intensities such as I₃ (at 3ω) and I₅ (at 5ω), are found to be nonzero.
An analysis of these harmonics, via ratios such as I₃/I₁ or I₅/I₁, is found to be a sensitive nonlinear probe of molecular weight distribution and branching [314]. In fact, Vittorias et al. [327] employed a combination of FT rheology [314] and the pom-pom model [53, 166] to determine the topology of BPs.
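The harmonic analysis at the heart of FT-rheology can be sketched on a toy LAOS stress signal; the 10% third-harmonic amplitude below is an illustrative assumption, not a measured value.

```python
import math

# Toy LAOS stress response with a weak, phase-shifted third harmonic.
OMEGA = 1.0

def stress(t):
    return math.sin(OMEGA * t) + 0.10 * math.sin(3 * OMEGA * t + 0.3)

# Fourier intensities I_n over one period T = 2*pi/OMEGA:
#   a_n = (2/T) int sigma cos(n w t) dt,  b_n = (2/T) int sigma sin(n w t) dt
T, N = 2 * math.pi / OMEGA, 4096
dt = T / N

def intensity(n):
    a = sum(stress(i * dt) * math.cos(n * OMEGA * i * dt) for i in range(N)) * 2 * dt / T
    b = sum(stress(i * dt) * math.sin(n * OMEGA * i * dt) for i in range(N)) * 2 * dt / T
    return math.hypot(a, b)

I1, I2, I3 = intensity(1), intensity(2), intensity(3)
ratio_31 = I3 / I1
```

The even harmonic I₂ vanishes (to round-off), while I₃/I₁ recovers the injected 10% nonlinearity, which is exactly the kind of ratio used as an LCB-sensitive probe.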

Multidetector Methods
The rheological signature of polymers with broad molecular weight distributions often mimics that of LCB. For example, η₀, which is especially sensitive to the high end of the molecular weight distribution, can be amplified to a greater degree by high molecular weight components than by LCB. Similarly, strain hardening at small strain rates due to high molecular weight components can completely mask the contribution of LCB [328]. Thus, an independent measurement of the molecular weight distribution, using SEC or SEC-MALLS, can help to separate the two effects, and it has now become quite common to complement rheology with a suite of independent tests to probe LCB.
Janzen and Colby [66] used the phenomenological model developed by Lusignan et al. [329, 330], which related η₀ to the average molecular weight and the spacing between branches, in addition to three other chemistry-specific parameters. They were careful to apply an explicit correction to the molecular weight inferred from SEC, due to branching-induced changes to the hydrodynamic volume. Given SEC data and η₀, they could then invert the phenomenological model to estimate the spacing between long-chain branches.
Wood-Adams and Dealy [42] used triple-detector SEC, which allowed them to measure the molecular weight and intrinsic viscosity of fractions emerging from the SEC column. They combined these measurements with linear viscoelasticity. They used an empirical expression, which was later linked to molecular theory [331], to compute the difference between molecular weight distribution curves obtained via chromatography and those inferred from complex viscosity [332], and linked the difference to LCB frequency.
van Ruymbeke et al. [63] used the molecular weight distribution measured by SEC with their time-dependent diffusion reptation model [213] for LPs to obtain a predicted linear viscoelastic response. They developed a criterion to quantify the level of LCB by analyzing the difference between the predicted and measured viscoelastic responses. They were able to detect LCB in the 1/1000 backbone C-atoms range, which is below the threshold of NMR, for high-density polyethylenes synthesized by Ziegler-Natta, Phillips, and metallocene catalysts.
Crosby et al. [333] suggested a dilution rheology method, where the concentration-dependent variation of , in conjunction with molecular theory and rudimentary molecular weight information, was able to quantify the LCB in different families of branched polyethylenes. Krause et al. [334] used electron beam irradiation to introduce branching in isotactic polypropylene and studied it using SEC and linear/nonlinear rheological experiments. They found rheology to be more sensitive to branching, as expected, but reported that a combination of both the methods resulted in better overall characterization. Some of the works previously referenced also employ multidetector methods [88, 219, 220, 302, 315].
In an excellent recent review on diagnosing low levels of branching in metallocene-catalyzed polyethylenes, Stadler [335] evaluated the effectiveness of many different rheological probes at low (30 K–50 K), medium (50 K–120 K), high (120 K–250 K), and ultrahigh (≥250 K) molecular weights. He concluded by advocating multidetector techniques by noting “as small amounts of long-chain branches can have a very similar effect as a high-molecular tail, it is not advisable to use rheological means alone to investigate long-chain branching. It should always be confirmed by SEC (or better by SEC-MALLS) that no such high molecular tails are present.” Likewise, in attempting to characterize branched polyacrylates, Castignolles et al. [336] opined that the application of 13C NMR is more complex and suggested combining spectroscopic, chromatographic, and rheological information to fully characterize molecular information.

4.2.2. Model-Driven Data-Analysis Methods

The trend away from measuring a single property to diagnose LCB, and towards processing multiple pieces of information in multidetector techniques, is extended further by model-driven methods. Some of the more sophisticated model-driven techniques allow us to combine heterogeneous data, like rheology and the molecular weight distribution obtained from multiple detectors, with molecular models, by exploiting some a priori information about the sample. I have already mentioned several methods which explicitly employ models (whether molecular-based or phenomenological) to interpret experimental data [63, 66, 316, 327, 331, 333]. The rest implicitly use a model, even though it may only be specified as an empirical correlation, and not really labeled as a “model.” Most studies using molecular-based models use predictions of the linear viscoelasticity of LPs to analyze the difference between the predicted and observed rheology. In the case of studies based on nonlinear rheology, predictions of the pom-pom model are used in some approximate manner, since, it should be reiterated, the pom-pom model is not a general model for the prediction of the nonlinear rheological response of arbitrary mixtures of BPs. Despite the availability of hierarchical models for the prediction of the linear viscoelasticity of mixtures of BPs, they have seen limited use in analytical rheology.

Perhaps the principal reason for this is the inability to construct an approximation to the full hierarchical model. That is, there is no counterpart to the idea of double reptation, which would allow us to approximately capture the linear viscoelastic behavior of BPs and state the inverse problem as a single equation like (2). However, in the 20 years since the first systematic applications of analytical rheology for LPs, computers have become significantly more powerful, and perhaps the need for an approximate model is overstated. Even if we neglect computational cost, there are methodological challenges that have to be surmounted to cast the inverse problem in a systematic form. Thus, in the following, we will focus primarily on recently introduced techniques for the inversion of the full hierarchical models for BPs.

As mentioned earlier, model-driven methods work best when additional information on the unknown sample is available. Information about the sample may be available in the form of a good understanding of the underlying polymerization kinetics, or through knowledge of blending processes (if two separately synthesized samples have been mixed). For example, the polymerization mechanism of single-site metallocene polyethylenes, which are commercially important, is relatively well understood. Such a sample can be characterized surprisingly well by only two parameters, the average molecular weight and the level of branching, even though the actual sample contains a mixture of linear, star, H-shaped, comb, and hyperbranched molecules [337, 338]. Algorithms for the generation of molecules of different shapes and sizes for various kinds of polymerizations have been developed by Tobita and coworkers [339–344]; these can help reduce the “dimensionality” of the inverse problem by allowing the description of a complex ensemble of molecules using only a few parameters.

If the sample has been prepared by blending two samples of different types, it is useful to know the blending ratio in addition to the polymerization mechanisms by which the two initial samples were created, so that the ensemble of molecules can potentially be constructed [345]. Once we are able to describe the molecules parametrically (using a few parameters), we can apply molecular models based on the tube theory.

Inversion of Hierarchical Models
Recently, a Bayesian formulation of the problem of the analytical rheology of BPs was reported [346–348], which imposed a systematic model-driven computational framework without sacrificing the flexibility of integrating heterogeneous pieces of information. Compared to the simple indices mentioned earlier, this technique is mathematically more complex, computationally far more demanding, and, because of its probabilistic outlook, sometimes harder to interpret.
The central insight of this technique is a movement away from the most common method of posing inverse problems in rheology, namely, by asking the question [347] “what structure and composition minimizes the distance (often measured in a least-squared sense) between experimental data and model predictions?” This conventional approach reduces analytical rheology into an optimization problem, as employed in the analytical rheology of LPs using parameterization or regularization. Of all the issues afflicting inverse problems in general [349, 350], the most serious for the analytical rheology of BPs in particular is the treatment of multiple solutions, or the prevalence of different mixtures which give rise to approximately the same rheology. Problems of “degeneracy” are significantly compounded for BPs when compared with LPs, not only due to the richness of potential mixtures and linear viscoelastic responses, but also due to the additional uncertainty introduced by more sophisticated models and experiments.
Different techniques adopt different strategies for dealing with degeneracy; multidetector methods discriminate between potential solutions by using multiple signals to weed out some structures which may be compatible with the rheology, but incompatible with SEC data, for example. In other techniques, parsimony is imposed from above to protect against overfitting the data: Thus, in the analytical rheology of LPs, parameterization techniques restrict the available degrees of freedom by only admitting solutions that conform with the prescribed parameterization, while regularization methods add a smoothness requirement to fence out the degenerate solutions.
This makes the interpretation of the results of optimization-based techniques straightforward, since we get a “best” solution defined in some sense. However, they suffer from several drawbacks due to their implicit assumption of uniqueness: they are not equipped to address or describe the existence of multiple solutions meaningfully. The Bayesian formulation transforms the inverse problem into a sampling problem, which can then be investigated using a Markov-chain Monte Carlo algorithm. By design, it avoids the trap that optimization-based methods suffer from and explores the distribution of structures and compositions consistent with a certain linear viscoelastic response. When properly implemented, it is capable of describing all possible solutions and assigning them a degree of likelihood. When applied to mixtures of linears and stars, for example, it was shown [346] that the method was able to (i) identify the number of components in an unknown mixture (whether single component or blend) due to the intrinsic Occam's razor [351–353], (ii) describe the composition of the mixtures in the absence of degeneracy, and (iii) describe multiple solutions, when it was not possible to rule them out. Further work [347] confronted the last of these issues, by recognizing that, while the identification of degeneracy is important, it is only the first step in its ultimate resolution. It provided an algorithm inspired by Larson's idea [93] of combinatorial rheology, or “performing rheological studies of partially characterized samples with fully characterized samples of known molecular weight and branching,” to suggest the best possible experiments to perform to discriminate between the multiple solutions. It should be recognized that degeneracy cannot be resolved without performing additional experiments.
More recently, this technique has been successfully applied to commercial metallocene-catalyzed polyethylenes [348], using multiple sources of data (rheology and SEC) to infer the probability distribution of the level of branching.
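A minimal sketch of the Bayesian strategy is shown below, with a toy two-Maxwell-mode expression standing in for a real tube-model calculation: a Metropolis random walk samples the posterior over the star weight fraction of a hypothetical linear/star blend, given noisy synthetic data. All model details (relaxation times, noise level, prior) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(phi, omega):
    """Toy linear-viscoelastic response of a blend (illustrative only):
    crude mixing of two Maxwell-like contributions with assumed times."""
    tau_lin, tau_star = 1.0, 50.0
    G = lambda tau: (omega * tau) ** 2 / (1 + (omega * tau) ** 2)
    return (1 - phi) * G(tau_lin) + phi * G(tau_star)

# Synthetic "measurement" at a known composition, with Gaussian noise
omega = np.logspace(-3, 2, 40)
phi_true, sigma = 0.3, 0.02
data = forward(phi_true, omega) + sigma * rng.standard_normal(omega.size)

def log_post(phi):
    """Log-posterior: uniform prior on [0, 1] plus Gaussian likelihood."""
    if not 0.0 <= phi <= 1.0:
        return -np.inf
    r = data - forward(phi, omega)
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis random walk over the composition
phi, samples = 0.5, []
lp = log_post(phi)
for _ in range(5000):
    prop = phi + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
        phi, lp = prop, lp_prop
    samples.append(phi)

post = np.array(samples[1000:])  # discard burn-in
```

Unlike a point estimate from optimization, the retained samples approximate the full posterior; a multimodal histogram of `post` would directly expose degeneracy, which is the feature exploited in [346, 347].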

Inversion of Nonlinear Models
As mentioned several times before, the pom-pom is the only BP for which a reasonable nonlinear rheology model exists. Until constitutive equations are developed for more complex branched architectures, the best we can do is attempt to associate a particular BP with a “similar” pom-pom polymer. Indeed, the idea of combining the results of a recent study by Read et al. [10] with the aforementioned Bayesian framework is very promising. Read et al. considered LDPE, a polymer of immense commercial importance, which is known to be particularly hard to model because of its highly branched nature. Using a 4-parameter polymerization model for the high-pressure free-radical synthesis, they fit the parameters using SEC (via g-ratio analysis) and linear viscoelastic data (using the BoB model [144]), enormously reducing the dimensionality of the problem. From the ensemble of molecules generated by the polymerization model with the fitted parameters, they mapped the priority (related to the number of free ends connected to an internal segment) and relaxation time distributions to pom-pom modes [163]. The predictions of the pom-pom model for the mapped structure agreed very well with experiments in both shear and extensional deformations. While this study did not directly invert the pom-pom model (using it for validation, instead), it is easy to envision how it could be incorporated into the Bayesian framework.
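The mapping step can be caricatured as a binning exercise: segments from the molecular ensemble, each carrying a priority and a relaxation time, are lumped into a handful of pom-pom modes. All numbers below are synthetic, and the averaging rules are assumptions for illustration, not the actual prescription of [163].

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
tau = 10 ** rng.uniform(-2, 4, n)  # segment relaxation times (s), synthetic
q = rng.integers(1, 8, n)          # segment priorities (free ends dragged)
g = rng.uniform(0.1, 1.0, n)       # modulus weight of each segment

# Lump segments with similar relaxation times into one pom-pom mode each:
# 6 logarithmic bins -> up to 6 modes.
edges = np.logspace(-2, 4, 7)
modes = []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (tau >= lo) & (tau < hi)
    if not m.any():
        continue
    modes.append({
        # backbone orientation time: weight-averaged geometric mean (assumed rule)
        "tau_b": np.exp(np.average(np.log(tau[m]), weights=g[m])),
        # effective number of arms: weight-averaged priority (assumed rule)
        "q": np.average(q[m], weights=g[m]),
        # modulus weight carried by this mode
        "g": g[m].sum(),
    })
```

Each resulting mode supplies the parameters (orientation time, arm number, modulus) that a multimode pom-pom calculation of shear and extensional response would consume.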

5. Summary and Perspective

The unusual sensitivity of rheology to molecular structure makes it an attractive tool for inferring structure from simple rheological measurements, where the sensitivity translates into the resolution of the analytical tool. This typically requires the inversion of a forward model (as shown in Figure 1), which may be a simple correlation or empirical rule of thumb (e.g., indices to infer LCB), a simplified toy version of a more complicated model (e.g., simple double-reptation-type models for inferring the molecular weight distribution of LPs), a simplified application of an incomplete model (e.g., use of the pom-pom model to map the nonlinear rheology of complex BPs), or a full-blown sophisticated model capable of handling arbitrary mixtures of polymers of arbitrary architectures (e.g., hierarchical models for linear viscoelasticity such as BoB). In addition to resolution, the extreme sensitivity of rheology also offers modest protection from the imperfection of these forward models (Figure 4). Analytical rheology also offers other important advantages over other analytical techniques: it is easier and cheaper to perform, it is more sensitive to precisely those components that have the greatest relevance to processing, and it circumvents the dissolution step, which poses its own set of challenges.

For mixtures of LPs, the empirical validity of the double-reptation picture enables us to formulate the problem of analytical rheology as the inversion of a Fredholm integral of the first kind, as shown in (2). The simplified kernel, along with a tunable parameter that models constraint release, offers a convenient and empirically reasonable approximation to a more complete and complex theory based on the tube model [142]. The task of solving the inverse problem is usually accomplished by regularization, parameterization, or stabilization.
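A minimal sketch of the regularization route, under assumed ingredients (mixing exponent β = 2, a reptation-time scaling τ(M) ∝ M^3.4, and a log-normal “true” MWD): after raising the relaxation data to the 1/β power, the double-reptation integral is linear in the weight distribution, and Tikhonov smoothing stabilizes the least-squares inversion.

```python
import numpy as np

# Assumed physical inputs (illustrative, not from the text)
beta = 2.0                     # classic double-reptation mixing exponent
K_tau, alpha = 1e-18, 3.4      # tau(M) = K_tau * M^alpha

lnM = np.linspace(np.log(1e4), np.log(1e7), 60)  # grid in ln(molar mass)
dlnM = lnM[1] - lnM[0]
t = np.logspace(-3, 5, 80)                       # time grid (s)
tau = K_tau * np.exp(alpha * lnM)                # tau(M) on the grid

# Discretized Fredholm kernel, using F(t, M)^(1/beta) = exp(-t / tau)
Kmat = np.exp(-t[:, None] / tau[None, :]) * dlnM

# "True" weight distribution: log-normal, normalized so that its integral is 1
w_true = np.exp(-0.5 * ((lnM - np.log(3e5)) / 0.5) ** 2)
w_true /= w_true.sum() * dlnM

# Forward model [G(t)/G_N]^(1/beta) = integral of w(lnM) F^(1/beta) dlnM, + noise
b = Kmat @ w_true
b_noisy = b * (1 + 0.01 * np.random.default_rng(0).standard_normal(b.size))

# Tikhonov: minimize ||K w - b||^2 + lam ||D2 w||^2, D2 = second-difference
# operator, penalizing curvature to fence out rough (degenerate) solutions
lam = 1e-3
D2 = np.diff(np.eye(lnM.size), n=2, axis=0)
A = np.vstack([Kmat, np.sqrt(lam) * D2])
rhs = np.concatenate([b_noisy, np.zeros(D2.shape[0])])
w_rec, *_ = np.linalg.lstsq(A, rhs, rcond=None)

rel_err = np.linalg.norm(w_rec - w_true) / np.linalg.norm(w_true)
```

Without the smoothness penalty (lam = 0), the near-singular exponential kernel amplifies the 1% noise into wild oscillations in `w_rec`; the penalty selects the smooth member of the family of near-degenerate solutions, which is exactly the role of regularization described above.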

For BPs with LCB, the need for analytical rheology is more pressing, since all the other available techniques, including g-ratio analysis and spectroscopy, are seriously flawed in one way or another. The general problem of inferring the structure of an arbitrary mixture of polymers of arbitrary architectures is far more complicated than the analytical rheology of LPs and, in this very general sense, remains unsolved, and perhaps unsolvable. That does not imply that meaningful information cannot be extracted for important subclasses of the general problem; indeed, the analytical rheology of LPs is itself a subset of the problem enunciated in this general form. If the chemistry of the polymer is specified (e.g., isotactic polypropylene), there exist methods to diagnose and quantify the extent of LCB using empirically inspired forward models. There is a general consensus that aggregating multiple sources of data, such as rheology and SEC-MALLS, is better than relying on either source alone, although there is no clear consensus on the manner in which the aggregation should best be carried out and interpreted. If the polymerization kinetics of the polymer sample are well established, then molecular model-driven methods can be employed to infer the branching information using recent Bayesian data-analysis methods, which currently offer the most sophisticated treatment of degeneracy. In addition to linear rheology, the force of nonlinear rheology may also be brought to bear upon the problem, perhaps through a systematic technique like FT-rheology. While there is currently no general theory capable of modeling the nonlinear rheology of arbitrary BPs, it is an area of intense research. Additionally, mapping complicated BPs to the pom-pom model has been modestly successful and can serve as a stop-gap measure until more progress is made in modeling nonlinear rheology.

In closing, it is useful to emphasize once again what analytical rheology can do well and what it cannot. Rheology is a very sensitive probe of certain features of polymer architecture, such as molecular weight and LCB. These features are extremely significant for many commercially important synthetic polymers (primarily polyolefins), and that is the niche that analytical rheology occupies. But the potential heterogeneities, even if we confine ourselves to synthetic polymers, extend far beyond that. Consider the example of poly(vinyl butyral) [19], which is used as an interlayer in laminated safety glass. It shows polydispersity in molar mass, chemical composition (it is a copolymer), SCB, and LCB. Sometimes it also exhibits crosslinking and graft-induced branching. In addition, the presence of intra- and intermolecular hydrogen bonding complicates dissolution, and, unlike well-studied polymers such as polyethylene, it lacks well-characterized narrow molecular weight linear standards. Clearly, even with perfect models and techniques for direct and inverse rheology, one cannot imagine fully characterizing such a polymer using rheological techniques alone.

Acknowledgment

Support from the National Science Foundation under NSF DMR 0953002 is gratefully acknowledged.