Abstract

Presently, there is a pressing need for lighter and more resistant structures with reduced manufacturing costs. Laminated polymers are materials that respond to these demands. A main difficulty of the design process of a composite laminate is the need to design both the geometry of the element and the material configuration itself; consequently, the possibilities for creating composite materials are almost unlimited. Many techniques, ranging from linear programming and finite elements to computational intelligence, have been used to solve this type of problem. The aim of this work is to show that more effective and dynamic methods for solving such problems are obtained by systematically exploiting knowledge of the problem, together with combining population-based and local-search metaheuristics. With this objective, a memetic algorithm has been designed and compared with the main heuristics used in the design of laminated polymers in different scenarios. All solutions obtained have been validated with the ANSYS® software package.

1. Introduction

A composite material is formed by the aggregation of two or more distinct materials to form a new one with enhanced properties: an agglomerate material known as the matrix and reinforcement materials that may be made up of continuous fibers, short fibers, or particles [1, 2]. A well-designed material adopts the best properties of its constituents and even some that none of them possess. The aim of composite material design is to generate new, cheaper materials with improved strength and lightness. Not all of these properties can be improved simultaneously; therefore, the design objective is to obtain a new material that best adapts to the required specifications.

The exceptional strength and lightness of these materials have led to the development of a vast number of applications, particularly in the aeronautical and space industries due to the economic significance of these properties. Figure 1 shows the markets for composite materials.

A laminate is a particularly important type of composite material made up of laminas of the same composition with unidirectional reinforcement fibers, stacked and bound together by the same material that forms the matrix, but with distinct fiber orientations. Within this category, symmetric laminates, which have both geometrical and structural symmetry relative to the mid-plane, are worthy of special attention.

One of the main difficulties of the design process of a laminate is designing both the geometry of the element and the material configuration itself so as to best exploit the qualities of the constituent materials. The design process must also evaluate the deterioration of the laminate properties over time due to stress, which can lead to unanticipated behaviour (cracks) or failure of the structural element in question.

Synthesis and analysis have traditionally been carried out using methods based on empirical knowledge [3]. This is partly because the number of possible combinations of composites is almost unlimited and partly because characterization by experimentation is very expensive.

Since the 1990s, different design systems which aim to overcome these limitations have been proposed. These proposals have involved approaches ranging from traditional techniques such as classical nonlinear optimization procedures combined with finite element modelling [4, 5], through generic task methods and case-based reasoning [6], to modern artificial intelligence techniques [7–16].

Memetic Algorithms (MA), also referred to as Memetic Computing, have proven to be efficient in numerous situations [17–21]. Figure 2 shows the scheme of a typical MA [22]. In addition to the operators Genetic Algorithms use to explore the search space, namely, recombination/crossover and mutation, a local search is performed in each generation. It improves the fitness of the population within a localized region of the solution space, so that the next generation inherits "better" genes from its parents.
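As an illustration of this scheme, the following sketch implements the generic MA loop of Figure 2 on a toy bit-string problem; all names, operators, and parameter values are illustrative, not taken from the paper.

```python
import random

random.seed(1)

def memetic_optimize(fitness, random_solution, crossover, mutate, local_search,
                     pop_size=20, generations=50):
    """Generic memetic loop: GA-style recombination plus a local search step
    applied to every individual in each generation."""
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Recombination and mutation, as in a classic GA.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))
        # Memetic step: refine individuals locally before the next generation.
        population = [local_search(ind, fitness) for ind in parents + children]
    return max(population, key=fitness)

def hill_climb(ind, fitness, tries=10):
    """Simple local search: accept single-bit flips that do not worsen fitness."""
    for _ in range(tries):
        i = random.randrange(len(ind))
        cand = ind[:i] + [1 - ind[i]] + ind[i + 1:]
        if fitness(cand) >= fitness(ind):
            ind = cand
    return ind

# Toy instance: maximize the number of ones in a 16-bit string.
best = memetic_optimize(
    fitness=sum,
    random_solution=lambda: [random.randint(0, 1) for _ in range(16)],
    crossover=lambda a, b: a[:8] + b[8:],
    mutate=lambda s: [1 - g if random.random() < 0.05 else g for g in s],
    local_search=hill_climb,
)
```

The per-generation local search is what distinguishes the MA from a plain GA: exploration is delegated to crossover and mutation, while exploitation of the surrounding region is delegated to the hill climber.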

The aim of this work is to show that more effective and dynamic methods for solving laminated polymer design problems are obtained by using techniques based on systematic exploitation of knowledge of the problem, together with the combination of population-based and local-search metaheuristics. With this objective, a MA has been designed and thoroughly compared with the main heuristics used in the design of laminated polymers in different scenarios. This work is organized as follows: in Section 2, a study of the main metaheuristics that have been used in the design of composite laminates is performed; in Section 3, a model for the design and optimization of symmetric laminates based on a MA is presented; in Section 4, this model is subjected to an exhaustive analysis and compared with other methods to prove its usefulness; in Section 5, the MA-based model is applied to specific cases. Finally, conclusions are presented in Section 6.

2. State of the Art of Composite Materials Design

2.1. Genetic Algorithms

Genetic Algorithms (GA) have been the most popular method for the design of laminated polymers and the optimization of stacking sequences [23]. Callahan and Weeks [24], Nagendra et al. [25], Le Riche and Haftka [26], and Ball et al. [27] were the first to adopt GA for the design of stacking sequences of composite laminated materials. GA have also been used in problems with different objective functions, such as strength [28, 29], buckling loads [9, 28, 30–34], dimensional stability [35], strain energy absorption [36], weight (either as a restriction or as an objective to minimize) [37, 38], bending/torsion coupling stiffness [36, 39], fundamental frequencies [34, 40–42], distortion [43], or finding laminate reference parameters [44].

GA have also been applied to the design of a variety of composite structures, ranging from simple rectangular plates to complex geometrical sheets, such as sandwich panels [45], rigid sheets [46], bolted joints [47], and laminated cylindrical panels [34]. Similarly, GA have been combined with finite element packages that analyse the stress and deformation response of the composite structure [43, 48, 49]. Combinations of methods are sometimes used; for example, Park et al. [50] combine a memory of previously evaluated solutions with a permutation operator and a mixture of local learning and random search, in order to reduce the number of function evaluations and improve the rate of convergence.

Some of the main problems of GA are their high computational cost and premature convergence. To address these drawbacks, increase the rate of convergence, reduce the risk of premature convergence, and lessen function evaluation time, several modifications have been proposed, including the use of parallel computation [51, 52]; multilevel optimization (coarse- and fine-level codification) [51]; the introduction of problem-dependent operators [26]; layer addition or deletion, permutation, and interlaminar exchange [9, 46]; generalized elitism and mutation of laminate thickness/material/fiber angle [49]; recovery of previously evaluated solutions [10, 50]; approximation methods for the evaluation function [33, 53, 54]; trained artificial neural networks [34, 42]; an initial blood-related population; or an aging hierarchic structure [55].

Sargent et al. [56] compared GA with other search algorithms (random search, greedy search, and simulated annealing) and observed that GA obtained better solutions than greedy search, which in some cases was unable to find a solution at all. Sivakumar et al. [57] compared the Davidon-Fletcher-Powell (DFP) quasi-Newton method and GA, applied to reducing the weight of a laminated composite constrained by its fundamental frequencies. They found that DFP converged in a reduced number of iterations when the number of restrictions was small; however, finding a feasible solution became difficult as the number of restrictions increased. Considering also that DFP could not handle discrete variables, it was concluded that "GA was a better tool to optimize laminated composites."

Even though GA have been widely used in the optimization of stacking sequences, an important flaw is their low rate of convergence. GA are population-based evolutionary algorithms and might require many generations before converging to a solution [58]. Each generation involves a great number of function evaluations; therefore, the method can require plenty of computational time and be very expensive.

2.2. Simulated Annealing

After Genetic Algorithms, simulated annealing (SA) is the second most popular method for the optimization of stacking sequences in laminated composite materials [59–62].

The main problem of this technique is that it may generate a sequence of points that converges to a nonoptimal solution. To address this shortcoming, some modifications have been proposed, such as increasing the probability of sampling points far from the current point [63] or using a set of points at a time instead of only one [64]. To increase the rate of convergence, Genovese et al. [65] proposed a two-level algorithm, comprising a "global annealing," in which all the design variables are perturbed simultaneously, and a "local annealing," in which only one design variable is perturbed at a time. The local annealing was performed after each iteration of the global annealing in an attempt to improve the current point locally. Its speed of convergence was found to be greater than that of one-level simulated annealing and comparable with a gradient-based optimization method implemented via Sequential Linear Programming (SLP).

SA is a good choice for the general problem of optimal laminate selection; however, it cannot easily be programmed to take advantage of the specific properties of a particular problem. GA are, in this respect, more flexible, but they frequently take more computational time [56, 61, 66]. This conclusion is not easy to generalize, since other studies, such as Rama Mohan Rao and Shyju [58], show that SA had better computational efficiency and was better at finding solutions to other combinatorial problems.

2.3. Tabu Search

Tabu Search (TS) was applied by Pai et al. [14] to the optimization of the stacking sequence of laminates subjected to buckling and strength requirements as well as to matrix rupture. Results were compared with GA: the solutions obtained were comparable, but the computational time depended on the case.

TS has also been used in combination with other techniques, such as SA. Rama Mohan Rao and Arvind [67] embedded a TS within SA, obtaining a method called tabu-embedded simulated annealing (TSA). The optimization of the stacking sequence of laminated composites was solved by TSA, while restrictions were handled through a correction strategy. TSA was faster than classic SA, although more memory was required.

2.4. Variable Neighbourhood Search

Variable neighbourhood search (VNS) has recently been applied to laminated polymer design. Corz et al. [68] propose and compare an algorithm based on variable neighbourhood search that allows the design of the geometry and composition of symmetric laminated composite materials. In that work, the implementation of the algorithm is shown in detail (objective function, encoding, fitness, neighbourhood structures, and local search), as well as the data used in the examples (volume fraction, stresses, and coefficients). The proposed model is compared with other techniques such as GA, SA, and TS, showing more efficient results.

3. A Memetic Algorithm-Based Model for the Design of Composite Materials

In this section, we present the optimization problem, an encoding scheme for the representation of the problem, a fitness function to evaluate the feasibility of solutions, and the reproductive and local search operators used.

3.1. Optimization Problem

The optimization problem can be formulated as follows: find the material (fiber f and matrix m), the volume fraction Vf, and the laminate stacking sequence θ1/θ2/.../θN, with the purpose of maximizing the utilization of the laminate, thus achieving the lowest number of laminas N. The set of design variables is expressed as the vector λ = (f, m, Vf, θ1, ..., θN). The degree of utilization of the laminate material is defined as a positive real number that is obtained by applying the fitness function F(λ), defined in Section 3.2, to this vector. Therefore, the optimization problem and its corresponding constraints can be defined as follows: maximize F(λ) subject to N ≥ 1 (number of laminas); 0.1 ≤ Vf ≤ 0.9 in 0.1 intervals (volume fraction); −90° ≤ θk ≤ 90° in 10° intervals (orientation of the laminas).
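As a sketch, a feasibility check over such a design vector might look as follows; the exact bounds (maximum number of laminas, volume fraction between 0.1 and 0.9, orientations within ±90°) are assumptions based on the intervals stated above.

```python
def is_feasible(design, n_max=12):
    """Check the constraints on a design vector (fiber, matrix, Vf, angles).

    Assumed bounds: at most n_max laminas, volume fraction in 0.1 steps
    between 0.1 and 0.9, orientations in 10-degree steps within [-90, 90].
    """
    fiber, matrix, vf, angles = design
    if not 1 <= len(angles) <= n_max:                   # number of laminas N
        return False
    if not 0.1 - 1e-9 <= vf <= 0.9 + 1e-9:              # volume fraction range
        return False
    if abs(vf * 10 - round(vf * 10)) > 1e-9:            # 0.1 intervals
        return False
    return all(a % 10 == 0 and -90 <= a <= 90 for a in angles)  # 10-degree steps

print(is_feasible((0, 1, 0.3, [50, -10, 0, 60, -20])))  # True
print(is_feasible((0, 1, 0.35, [45])))                  # False
```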

3.2. Encoding

The structure of the laminate is represented by an agent formed by four information units (IU), which encode the fiber, the matrix, the volume fraction, and the laminate geometry (see Figure 3). The fitness, which shows how well adapted the solution is to its environment, is added to the structure.

Each of the four IU that define the laminate represents a different characteristic of its composition: the fiber and the matrix are represented by integer numbers; the volume fraction is represented by a real number; and the geometry is represented by a sequence of integers that encode the directions of the fibers. The values used for the different IU are shown in Table 1.
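A direct transcription of this agent structure, with assumed field types, could be:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Candidate laminate: four information units plus the fitness value.

    Fiber and matrix are indices into material tables (Table 1); the geometry
    lists the half-stack orientations of a symmetric laminate in degrees.
    """
    fiber: int
    matrix: int
    volume_fraction: float
    geometry: list
    fitness: float = 0.0

    def full_stack(self):
        # Expand the symmetric half into the complete stacking sequence.
        return self.geometry + self.geometry[::-1]

a = Agent(fiber=0, matrix=1, volume_fraction=0.5, geometry=[50, -10, 0, 60, -20])
```

Storing only the half-stack exploits the mid-plane symmetry: the full sequence is recovered by mirroring, so the search space is halved.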

To represent the geometry of the laminate, we use the following notation: [50/−10/0/60/−20]s denotes a symmetric laminate whose outer (first) lamina has an orientation of 50°, whose second lamina has an orientation of −10°, and so on. A simplified representation of a symmetric laminate is shown in Figure 4.

Different failure criteria can be used to determine whether a lamina can withstand specific stress conditions without breaking [1]. In this article, the Tsai-Wu failure criterion [69] has been selected. In order to determine if a lamina breaks, for each lamina i of a laminate, its Tsai-Wu coefficient CTW(i) is defined as

CTW(i) = H1·σ1 + H2·σ2 + H11·σ1² + H22·σ2² + H66·τ12² + 2·H12·σ1·σ2,

with H1 = 1/(σ1T)ult − 1/(σ1C)ult, H11 = 1/[(σ1T)ult·(σ1C)ult], H2 = 1/(σ2T)ult − 1/(σ2C)ult, H22 = 1/[(σ2T)ult·(σ2C)ult], and H66 = 1/(τ12)ult², where (σ1T)ult, (σ1C)ult, (σ2T)ult, (σ2C)ult, and (τ12)ult are, respectively, the ultimate tensile and compressive strengths in the fiber direction, the ultimate tensile and compressive transversal strengths, and the shear strength; σ1, σ2, and τ12 are the normal and shear stresses of the lamina in the material axes; and H12 is the interaction coefficient of an orthotropic lamina under plane stress conditions. Applying the Tsai-Wu failure criterion [69], the lamina does not break if CTW(i) < 1, i = 1, ..., N, and breaks otherwise. Furthermore, if CTW(i) = 1, the lamina is working at full capacity. If a lamina is broken, then the entire laminate is discarded.
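Under the standard plane-stress Tsai-Wu formulation, the coefficient of a lamina can be computed as below. The strength values in the example are illustrative graphite/epoxy figures, not the paper's data, and the interaction term uses the common Mises-Hencky estimate.

```python
import math

def tsai_wu(s1, s2, t12, s1t, s1c, s2t, s2c, t12u):
    """Plane-stress Tsai-Wu coefficient for one lamina.

    s1, s2, t12 are the stresses in material axes; s1t/s1c, s2t/s2c, t12u are
    the ultimate tensile/compressive and shear strengths (all positive).
    The lamina is predicted to fail when the returned value reaches 1.
    """
    h1 = 1 / s1t - 1 / s1c
    h2 = 1 / s2t - 1 / s2c
    h11 = 1 / (s1t * s1c)
    h22 = 1 / (s2t * s2c)
    h66 = 1 / t12u ** 2
    h12 = -0.5 * math.sqrt(h11 * h22)  # Mises-Hencky estimate of the interaction term
    return (h1 * s1 + h2 * s2 + h11 * s1 ** 2 + h22 * s2 ** 2
            + h66 * t12 ** 2 + 2 * h12 * s1 * s2)

# Illustrative graphite/epoxy strengths (MPa) and a moderate stress state.
c = tsai_wu(s1=600.0, s2=20.0, t12=30.0,
            s1t=1500.0, s1c=1500.0, s2t=40.0, s2c=246.0, t12u=68.0)
# c < 1: the lamina does not break; c == 1 would mean full utilization.
```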

3.3. Fitness

The fitness function considers economic and safety criteria: the alignment of stresses along the direction of the fibers is rewarded, while the following are penalized: high volume fraction and laminate thickness; stacking more than four consecutive laminas with the same fiber orientation; the distribution of stress in directions other than that of the fibers; and lamina breakage. A higher value indicates a better solution. The fitness function, F, is defined as a weighted combination of these terms, with the coefficients described in Table 2. A multiplicative factor is used to keep fitness values within a manageable range; its exponent has been empirically determined as −28.
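Since the exact expression and its coefficients live in Table 2, the following is only a schematic penalty-based fitness illustrating the criteria listed above; the function name and every coefficient value are placeholders, not the paper's.

```python
def fitness(tw_coeffs, vf, angles, c_thick=0.1, c_vf=0.1, c_run=0.5):
    """Schematic fitness: reward laminas working near full capacity (Tsai-Wu
    coefficient close to 1) and penalize thickness, volume fraction, and runs
    of more than four consecutive laminas with the same orientation.
    All coefficients here are placeholders."""
    if any(c >= 1 for c in tw_coeffs):
        return 0.0                       # a broken lamina discards the laminate
    utilization = sum(tw_coeffs) / len(tw_coeffs)
    run, excess = 1, 0
    for prev, cur in zip(angles, angles[1:]):
        run = run + 1 if cur == prev else 1
        if run > 4:                      # laminas beyond the fourth in a run
            excess += 1
    return utilization - c_thick * len(angles) / 10 - c_vf * vf - c_run * excess
```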

3.4. Reproductive Operator

The crossover operator is different for each IU. For the fiber, matrix, and volume fraction IU, a classic one-point crossover operator adapted to the different possible cases is used. However, for the geometry IU, the classic crossover operator is not used because it is very static with respect to changing the number of layers. To avoid this problem, a dynamic crossover operator with a different crossover point for each parent is proposed. The crossover points are determined by generating two integer numbers from a discrete uniform distribution: the first lies between 0 and the number of laminas of the first laminate, N1, and the second between 1 and the number of laminas of the second laminate, N2, plus 1. Then, the parent laminates are split into two parts, and the child is formed by joining the left part of the first parent with the right part of the second parent. Depending on the location of these crossover points, the different cases shown in Table 3 can be obtained.
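A minimal sketch of this operator for the geometry IU (the handling of the other IU and of the individual cases of Table 3 is omitted):

```python
import random

def dynamic_crossover(parent1, parent2, rng=random):
    """Two independent cut points, one per parent, so the child's number of
    laminas can grow or shrink, unlike classic one-point crossover."""
    p = rng.randint(0, len(parent1))       # 0 .. N1
    q = rng.randint(1, len(parent2) + 1)   # 1 .. N2 + 1
    # Left part of the first parent joined with the right part of the second.
    return parent1[:p] + parent2[q - 1:]

random.seed(0)
lengths = {len(dynamic_crossover([50, -10, 0], [60, -20, 30, 40]))
           for _ in range(200)}
# The child length varies between 0 and N1 + N2, which is the point of
# using two independent cut points.
```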

3.5. Local Search Operators

A set of local search operators is defined, following previous applications of a variable neighbourhood search-based model [68]. These operators allow a lamina to be added or removed and allow its orientation, type of matrix, type of fiber, or volume fraction to be changed. Two new operators have been defined in order to better exploit the region being explored at a given time. The operators proposed in the model are summarized in Table 4.
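A sketch of such neighbourhood operators for the geometry IU, together with a first-improvement local search driver; the operator details and the acceptance rule are illustrative, not the paper's exact definitions (Table 4).

```python
import random

def add_lamina(geom, rng):
    # Insert a lamina with a random 10-degree orientation at a random position.
    i = rng.randrange(len(geom) + 1)
    return geom[:i] + [rng.randrange(-9, 10) * 10] + geom[i:]

def remove_lamina(geom, rng):
    # Delete a random lamina, keeping at least one.
    if len(geom) <= 1:
        return list(geom)
    i = rng.randrange(len(geom))
    return geom[:i] + geom[i + 1:]

def rotate_lamina(geom, rng):
    # Replace the orientation of a random lamina.
    geom = list(geom)
    geom[rng.randrange(len(geom))] = rng.randrange(-9, 10) * 10
    return geom

def local_search(geom, score, operators, steps=50, rng=random):
    """First-improvement descent: apply a random operator and keep the
    neighbour only if it strictly improves the score."""
    for _ in range(steps):
        neighbour = rng.choice(operators)(geom, rng)
        if score(neighbour) > score(geom):
            geom = neighbour
    return geom

# Toy score preferring laminates with exactly four laminas.
random.seed(2)
result = local_search([0], lambda g: -abs(len(g) - 4),
                      [add_lamina, remove_lamina, rotate_lamina])
```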

3.6. Implementation

The proposed MA-based model is compared with other heuristics in order to verify its advantages. These heuristics (detailed in Section 2) are GA, VNS, SA, and TS. All heuristics use the same representation of the problem proposed in this paper. Experiments were performed on the Picasso supercomputer at the University of Malaga (512 IBM PowerPC 970 processors, 2994.04 GFlops).

Tables 5 and 6 show the characteristics of the materials (fibers and matrices) used in the laminate design tests.

4. Experiments and Results

The problem is to determine the composition and thickness of a laminated plate under a distributed in-plane loading. Table 7 presents the statistics for 21 different loading cases and the 5 heuristics indicated in Section 3.6, with 500 simulations each. For each model, the results shown correspond to the smallest number of laminas obtained, the number of times this smallest number is obtained, NL, and the average number of laminas. The best (maximum) fitness obtained and its average are also presented.

4.1. Statistical Significance Test

In order to analyse the results of the simulations, a statistical significance test is applied. The choice of test depends upon what is intended to be studied; Demšar [70] showed that nonparametric tests are safer and more appropriate than parametric tests for comparisons between two or more algorithms on multiple data sets.

A null or no-effect hypothesis is formulated prior to the application of the test. It usually states the equality or absence of differences among the results of the algorithms, and alternative hypotheses can be raised that support the opposite [71]. The null hypothesis is denoted by H0 and the alternative hypothesis by H1. The application of a test leads to the computation of a statistic, which can be used to reject the null hypothesis at a given level of significance α. It is also possible to compute the smallest level of significance that results in the rejection of the null hypothesis. This level is the p value: the probability of obtaining a result at least as extreme as the one actually observed, assuming that the null hypothesis is true. The use of p values is often preferred over fixed significance levels, since they provide cleaner measures of how significant the result is (the smaller the p value, the stronger the evidence against the null hypothesis) [72].

The Friedman test [73, 74] is a nonparametric test for multiple comparisons that aims to detect significant differences between the behaviour of two or more algorithms. The null hypothesis of Friedman's test states the equality of medians between the populations. The alternative hypothesis is defined as the negation of the null hypothesis, so it is nondirectional. The first step in calculating the test statistic is to convert the original results to ranks. They are computed using the following procedure (n is the number of problems considered, with associated index i, and k is the number of algorithms included in the comparison, with associated index j):
(1) Gather the observed results for each algorithm/problem pair.
(2) For each problem i, rank the values from 1 (best result) to k (worst result). Denote these ranks as r_i^j (1 ≤ j ≤ k).
(3) For each algorithm j, average the ranks obtained over all problems to obtain the final rank R_j = (1/n) Σ_i r_i^j.
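The ranking procedure above can be sketched as follows; ties receive the mean of the ranks they span, as is usual for the Friedman test.

```python
def friedman_ranks(results):
    """results[i][j] is the score of algorithm j on problem i (higher is better).
    Returns the average rank R_j of each algorithm; 1 is the best possible."""
    n, k = len(results), len(results[0])
    totals = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j], reverse=True)
        r = 0
        while r < k:
            # Group tied values and assign them the mean of the ranks they span.
            tied = [order[r]]
            while r + len(tied) < k and row[order[r + len(tied)]] == row[order[r]]:
                tied.append(order[r + len(tied)])
            mean_rank = (2 * r + len(tied) + 1) / 2  # mean of ranks r+1 .. r+len(tied)
            for j in tied:
                totals[j] += mean_rank
            r += len(tied)
    return [t / n for t in totals]

# Three problems, three algorithms; algorithm 0 wins twice, algorithm 1 once.
ranks = friedman_ranks([[3, 2, 1], [3, 2, 1], [2, 3, 1]])
```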

Thus, the test ranks the algorithms for each problem separately; the best performing algorithm gets rank 1, the second best rank 2, and so forth. Under the null hypothesis, which states that all the algorithms behave similarly (and therefore their ranks should be equal), the Friedman statistic can be computed as

χF² = (12n / (k(k + 1))) [ Σ_j R_j² − k(k + 1)²/4 ],

which is distributed according to a chi-square distribution with (k − 1) degrees of freedom.

Iman and Davenport [75] derived a less conservative alternative, with the statistic

FID = ((n − 1) χF²) / (n(k − 1) − χF²),

distributed according to the F-distribution with (k − 1) and (k − 1)(n − 1) degrees of freedom.
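Both statistics follow directly from the average ranks. As a consistency check, feeding the Friedman chi-square value reported later in this section (82.552381), with n = 21 problems and k = 5 heuristics, into the Iman-Davenport formula reproduces the reported value 1140.526316.

```python
def friedman_statistic(avg_ranks, n):
    """Chi-square Friedman statistic from the average ranks R_j of k algorithms
    over n problems; it has (k - 1) degrees of freedom."""
    k = len(avg_ranks)
    return 12 * n / (k * (k + 1)) * (sum(r * r for r in avg_ranks)
                                     - k * (k + 1) ** 2 / 4)

def iman_davenport(chi2_f, n, k):
    """Iman-Davenport F statistic with (k - 1) and (k - 1)(n - 1) degrees of freedom."""
    return (n - 1) * chi2_f / (n * (k - 1) - chi2_f)

# Equal ranks give a zero statistic (no detectable difference).
no_diff = friedman_statistic([3.0] * 5, 21)
# The paper's chi-square value with n = 21, k = 5.
f_id = iman_davenport(82.552381, 21, 5)
```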

Table 8 depicts the ranks computed through the Friedman test for all heuristics considered in the exhaustive analysis (in our case of study, n = 21 and k = 5) according to the average fitness of each loading case. As can be deduced from Table 8, MA, with a rank of 1, is the best performing algorithm, whereas SA, with a rank of 4.948, is the worst.

The Friedman statistic (distributed according to chi-square with 4 degrees of freedom) is 82.552381, and the Iman-Davenport extension (distributed according to the F-distribution with 4 and 80 degrees of freedom) is 1140.526316. In both cases, the p value is less than the significance level α, and thus H0, which states that there is no difference in rankings for these 5 heuristics, is rejected.

Rejection of H0 must be followed by a post hoc procedure to characterize the differences between algorithms. The aim of applying post hoc procedures is to perform a comparison considering a control method and a set of algorithms. A family of hypotheses, all related to the control method, can be defined; the application of a post hoc test then leads to a p value which determines the degree of rejection of each hypothesis [76]. A family of hypotheses is a set of logically interrelated hypotheses of comparisons which, in one-versus-all comparisons, compares the (k − 1) algorithms of the study (excluding the control) with the control method, whereas, in all-pairwise comparisons, it considers all possible comparisons among algorithms. Therefore, the family will be composed of k − 1 or k(k − 1)/2 hypotheses, respectively, which can be ordered by their p value, from lowest to highest.

In our comparison, the appropriate post hoc procedure is the Nemenyi test [77], because multiple algorithms are compared on multiple data sets. In our case of study (n = 21 and k = 5), there are 10 hypotheses for the possible comparisons among algorithms. The p values for the Nemenyi test are shown in Table 9, ordered from lowest to highest; in each row, the first heuristic has a better (lower) average ranking than the second. If the corresponding p value is smaller than α, then the first heuristic is significantly better than the second (H0 hypothesis rejected).
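Following the formulation in Demšar [70], each pairwise comparison reduces to a standardized difference of average ranks, whose p value is then compared against an α adjusted for the number of comparisons. A sketch, using the MA and SA ranks from Table 8:

```python
import math

def rank_difference_z(rank_a, rank_b, n, k):
    """Standardized difference of two average ranks over n problems and k
    algorithms, as used by the Nemenyi post hoc procedure."""
    standard_error = math.sqrt(k * (k + 1) / (6 * n))
    return (rank_a - rank_b) / standard_error

# MA (rank 1) against SA (rank 4.948) for n = 21 and k = 5: a large z,
# so the corresponding p value is far below any usual adjusted alpha.
z = rank_difference_z(4.948, 1.0, n=21, k=5)
```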

As can be deduced from Table 9, the proposed MA-based model performs significantly better than the others, because the H0 hypothesis is rejected in all comparative pairs.

4.2. Analysis of Results

As indicated in Section 4.1, the proposed MA-based model is significantly better than the other heuristics with respect to the fitness function. Moreover, in many cases, the proposed model obtains a material with the lowest number of layers, and it does so more often than the other heuristics (see Table 7). Figure 5 shows the average number of laminas of the laminates obtained with all heuristics.

As an example of the results, Table 10 shows the specifications of the best laminate obtained with each of the 5 heuristics in one of the loading cases.

Finally, Figure 6 graphically shows the different coefficients (the Tsai-Wu coefficient, among others) corresponding to the laminas of the best laminate obtained with the MA-based model in the loading case shown in Table 10. In this case, a large degree of uniformity of the coefficients around 1 can be observed, which indicates an excellent exploitation of all the laminas.

5. Application to Specific Cases

Previously, the proposed model was applied to general cases. In this section, it is applied to two significant specific cases, comparing the results with other authors' solutions.

5.1. Minimum Thickness Design

Le Riche and Haftka [26] developed a GA for minimum-thickness composite laminate design. They consider a graphite-epoxy plate under a given in-plane load case. The characteristics of the best laminate obtained by the authors are shown in Table 11.

The laminate provided by the proposed MA-based model, under the same load case, is shown in Table 12.

5.2. Stacking Sequence Design

Liu et al. [54] developed a permutation GA for the stacking sequence design of composite laminates. They consider a graphite-epoxy laminate under a given in-plane load case. The characteristics of the best laminate obtained by the authors are shown in Table 13.

The laminate provided by the proposed MA-based model, under the same load case, is shown in Table 14.

The laminates represented in Tables 12 and 14 have been validated using finite element simulation software. To do so, we employed the ANSYS 17.0 software package; the SHELL281 element was used for the validation, maintaining the same conditions and loadings where possible. Its characteristics can be consulted in [78].

6. Conclusions

In this paper, we have shown that the use of techniques based on the systematic exploitation of knowledge of the problem, together with the combination of population-based and local-search metaheuristics, offers more effective and dynamic methods for the solution of laminated polymer design problems. To this end, a study of the main techniques used in the design of laminated polymers was first performed, in order to carry out a comparative analysis.

A Memetic Computing-based model has been presented for the design of symmetric laminated composites and structures. This model implements a general encoding for the design of composites and a fitness function that takes into account economic and safety criteria. In addition, a dynamic reproductive operator is presented, in which the classic crossover operator is modified in order to make changes in the number of layers more dynamic.

Finally, a set of local search operators has been implemented. These allow a lamina to be added or removed and its orientation, type of matrix, type of fiber, or volume fraction to be changed. Two new operators have been defined and added to the set to improve the exploitation of solutions within a region of the search space.

The proposed Memetic Computing model has been subjected to a broad analysis and applied to two specific cases. First, the model was applied to the design of a plate under a distributed in-plane loading, and the results were compared with those obtained by other well-known design methods; in most cases, both the minimum and the average number of laminas are lower. The proposed model was also compared with two significant models found in the literature, obtaining comparable results. These results show that the proposed model is a general and flexible design method for symmetric laminated composites and structures, and we consider that it can be applied to real-life scenarios with high reliability. Finally, all solutions obtained by the proposed MA-based model in Sections 5.1 and 5.2 have been validated by the ANSYS software package using the same conditions and load system.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was partially supported by the Department of Research, Area of Publications, Critical Mass and Patents of the University of Guayaquil, Ecuador.