Abstract

This paper presents an automatic neural network (NN) system capable of simulating and predicting a range of applied problems. If the required performance is not reached, the system architecture is automatically reorganized and the experimental process is restarted; this continues until the required performance is obtained. The system is first applied and tested on the two-spiral problem, where it achieves excellent generalization performance by classifying all points of the two spirals correctly. It is then applied and tested on the shear stress and the pressure drop across the short orifice die as a function of shear rate at different mean pressures for linear low-density polyethylene copolymer (LLDPE) at C. The system shows good agreement with the experimental data in both cases, shear stress and pressure drop. The proposed system is also designed to simulate (predict) distributions not presented in the training set, and it matches them effectively.

1. Introduction

Neural networks are widely used for solving many linear and nonlinear problems across the sciences [1–9]. Neural network training algorithms are iterative, designed to minimize, step by step, the difference between the actual output vector of the network and the desired output vector until a target error is reached; examples include the backpropagation (BP) algorithm [10–12] and the resilient propagation (RPROP) algorithm [13–15].

Neural classifiers can deal with many multivariable nonlinear problems for which an accurate analytical solution is difficult to obtain. The usefulness of neural classifiers, however, depends on several parameters that are crucial to accurate prediction of the properties sought. The appropriate neural architecture, the number of hidden layers, and the number of neurons in each hidden layer are choices that greatly affect the accuracy of the prediction. Unfortunately, there is no direct method to specify these factors; they need to be determined on an experimental, trial-and-error basis [16].

The two-spiral benchmark is considered one of the most difficult problems in two-class pattern classification because of its complicated decision boundary [17]. It is extremely hard to solve using multilayer perceptron models trained with various BP algorithms [18]. It is therefore a well-known benchmark for testing the quality of neural network classifiers [19].

The effects of pressure on the viscosity and flow stability of a commercial-grade polyethylene (PE), namely linear low-density polyethylene copolymer, have been studied. The range of shear rates considered covers both stable and unstable flow regimes. “Enhanced exit-pressure” experiments have been performed, attaining pressures of the order of Pa at the die exit. The necessary experimental conditions have been clearly defined so that dissipative heating can be neglected.

Very high pressures can be exerted on polymers during processing. At these pressure levels, polymer melt properties and flow stability evolve according to laws that differ from those valid at moderate pressures. In the work of Couch and Binding [20], the temperature and pressure dependence of shear stress was modeled. Carreras et al. [21] studied these effects experimentally using different rheometers. The data obtained by Carreras et al. [21] are chosen here to be modeled with neural networks based on the BP and RPROP algorithms.

BP is the most widely used algorithm for supervised learning with multilayered feed-forward networks [22] and is very well known, whereas the RPROP algorithm is less well known and is therefore described in some detail in Section 3.1.

The RPROP algorithm has been shown to be faster than BP [23, 24]; therefore, RPROP is chosen for this study. The present work offers an efficient neural network used to predict unknown data of the shear stress and the pressure drop across the short orifice die as a function of shear rate at different mean pressures for linear low-density polyethylene copolymer at C. The following sections provide a brief introduction to NNs, describe the selected NN structure and training data, and discuss the results.

2. The Studied Problems

2.1. Two Spirals

The two-spiral problem is a classification task that consists of deciding in which of two interlocking spiral-shaped regions a given coordinate lies. The interlocking spiral shapes are chosen for this problem because they are not linearly separable. Finding a neural network solution to the two-spiral problem has proven to be very difficult when using a traditional gradient-descent learning method such as backpropagation, and it has therefore been used in a number of studies to test new learning methods; see, for instance, [25, 26].

To learn and solve this task, a training set of 194 preclassified coordinates is used. Half of the coordinates lie in one spiral-shaped region and are marked with triangles; the other half lie in the second spiral-shaped region and are marked with circles. The coordinates of the 97 triangles are generated using the equations given in [27]; a sketch of this generation is given below. The coordinates of the circles are generated simply by negating the coordinates of the triangles [27].
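The following minimal sketch illustrates how such a training set can be generated, assuming the commonly cited parameterization of the two-spiral benchmark (i = 0, …, 96, angle iπ/16, radius 6.5(104 − i)/104); the exact constants used in [27] may differ slightly, so this is an illustration rather than a reproduction of the original code.

```python
import numpy as np

def two_spirals(n_per_class=97):
    i = np.arange(n_per_class)
    angle = i * np.pi / 16.0              # angular position of the i-th point
    radius = 6.5 * (104 - i) / 104.0      # radius shrinks toward the origin
    x = radius * np.cos(angle)
    y = radius * np.sin(angle)
    triangles = np.column_stack((x, y))   # "triangle" spiral, target +1
    circles = -triangles                  # "circle" spiral: negated coordinates, target -1
    points = np.vstack((triangles, circles))
    targets = np.hstack((np.ones(n_per_class), -np.ones(n_per_class)))
    return points, targets                # 194 preclassified (x, y) points

X, t = two_spirals()                      # X.shape == (194, 2)
```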

To perform a correct classification, the neural network takes two inputs corresponding to an (x, y) coordinate and produces a positive signal if the point falls within the spiral drawn with triangles and a negative signal if the point falls within the spiral drawn with circles [27].

2.2. Linear Low-Density Polyethylene Copolymer

The studied problem consists of two dependent parts: the first is the pressure drop across the short orifice die as a function of shear rate at different mean pressures for linear low-density polyethylene copolymer at C, and the second is the shear stress dependence on shear rate for the same fluid at C, which represents the flow curves. Each part contains seven groups of data, and each group has a number of samples as specified in [21]. Group number 5 is set aside for prediction in each part, while the other six groups are chosen as patterns for training. The six groups for each part are prepared as input patterns of the proposed neural network algorithm.

This problem has two inputs (mean pressure and shear rate) and a single output in each part (pressure drop (Pa) in the first part and shear stress (Pa) in the second), because there is only one target value associated with each input vector; see Figure 1.

3. Neural Networks

Neural networks consist of a number of units (neurons) which are connected by weighted links. These units are typically organised in several layers, namely, an input layer, one or more hidden layers, and an output layer. The input layer receives an external activation vector and passes it via weighted connections to the units in the first hidden layer. Figure 2 shows an input layer with R elements, one hidden layer with S neurons, and an output layer with one element. Each neuron in the network is a simple processing unit that computes its activation with respect to its incoming excitation, the so-called net input
$$\mathrm{net}_j = \sum_{i \in P_j} w_{ij}\, a_i + b_j,$$
where $P_j$ denotes the set of predecessors of unit $j$, $w_{ij}$ denotes the connection weight from unit $i$ to unit $j$, and $b_j$ is the unit bias value. The activation of unit $j$, $a_j$, is computed by passing the net input through a nonlinear activation function. The tan-sigmoid function is applied in the proposed work as follows:
$$a_j = \frac{2}{1 + e^{-2\,\mathrm{net}_j}} - 1 = \tanh(\mathrm{net}_j).$$
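To make the notation concrete, the following sketch implements the forward pass of the network in Figure 2 (R inputs, one tan-sigmoid hidden layer of S neurons, one linear output). The array names (IW, LW, b1, b2) mirror the notation used later in Figure 4, but the function itself is an illustration written for this paper summary, not code taken from the original work.

```python
import numpy as np

def tansig(n):
    # Tan-sigmoid activation: 2 / (1 + exp(-2n)) - 1, numerically equal to tanh(n).
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

def forward(p, IW, b1, LW, b2):
    # p: input vector of length R; IW: (S, R) hidden weights; LW: (1, S) output weights.
    n1 = IW @ p + b1        # net input of the hidden layer
    a1 = tansig(n1)         # hidden-layer activations
    return LW @ a1 + b2     # linear output layer
```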

3.1. RPROP Algorithm

In the RPROP algorithm, each weight $w_{ij}$ has an individual update-value $\Delta_{ij}$, which determines the size of the weight update. This adaptive update-value evolves during the learning process based on its local sight on the error function $E$, according to the following learning rule [13]:
$$
\Delta_{ij}^{(t)} =
\begin{cases}
\eta^{+}\cdot\Delta_{ij}^{(t-1)}, & \text{if } \dfrac{\partial E^{(t-1)}}{\partial w_{ij}}\cdot\dfrac{\partial E^{(t)}}{\partial w_{ij}} > 0,\\[6pt]
\eta^{-}\cdot\Delta_{ij}^{(t-1)}, & \text{if } \dfrac{\partial E^{(t-1)}}{\partial w_{ij}}\cdot\dfrac{\partial E^{(t)}}{\partial w_{ij}} < 0,\\[6pt]
\Delta_{ij}^{(t-1)}, & \text{otherwise},
\end{cases}
\qquad 0 < \eta^{-} < 1 < \eta^{+}.
$$

The size of the weight change is exclusively determined by the weight-specific update-value $\Delta_{ij}$. Every time the partial derivative of the corresponding weight changes its sign, the update-value is decreased by the factor $\eta^{-}$; this indicates that the last update was too big and the algorithm jumped over a local minimum. On the other hand, if the derivative retains its sign, the update-value is slightly increased by the factor $\eta^{+}$ in order to accelerate convergence in shallow regions. Once the update-value for each weight is adapted, the weight update is applied as follows: if the derivative is positive (increasing error), the weight is decreased by its update-value; if the derivative is negative, the update-value is added. The weights are then updated using these update-values; see [23].
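As an illustration, the following sketch implements one RPROP iteration over a weight vector as described above, without the optional weight-backtracking step, and assuming the standard constants η⁺ = 1.2, η⁻ = 0.5 and update-value bounds Δmin = 10⁻⁶, Δmax = 50 recommended in [23]; the function name and signature are chosen for this sketch only.

```python
import numpy as np

def rprop_step(w, grad, grad_prev, delta,
               eta_plus=1.2, eta_minus=0.5, delta_min=1e-6, delta_max=50.0):
    sign_change = grad * grad_prev
    # Same sign: grow the update-value to accelerate in shallow regions.
    delta = np.where(sign_change > 0, np.minimum(delta * eta_plus, delta_max), delta)
    # Sign flip: the last step jumped over a minimum, so shrink the update-value.
    delta = np.where(sign_change < 0, np.maximum(delta * eta_minus, delta_min), delta)
    # After a sign flip the derivative is treated as zero, so no step is taken there.
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * delta     # step against the sign of the derivative
    return w, grad, delta             # grad becomes grad_prev in the next iteration
```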

As mentioned at the end of Section 1, the RPROP algorithm is faster than BP. The main reason for its success is that the size of the weight step depends only on the sequence of signs, not on the magnitude of the derivative, as shown by Riedmiller and Braun [23]. The RPROP algorithm has fewer parameters that need to be tuned and promises to provide the same performance as an optimally trained network using the BP algorithm.

3.2. Proposed System

The proposed system is designed to work automatically, starting with random initial weight and bias values. Many NN experiments are performed to obtain the optimal NN results for the two-spiral problem by repeating the same experiment with the same NN architecture (number of hidden layers and neurons). Accordingly, 500 NN experiments and 400 neurons are set as the maximum numbers for this system. The system stops when the best network is obtained. The system is trained and tested using different parameters, for instance, by changing the number of hidden layers, neurons, and epochs. The experimental data sets of the two physical problems (the shear stress and the pressure drop across the short orifice die as a function of shear rate at different mean pressures for linear low-density polyethylene copolymer (LLDPE) at C) are smaller; therefore, the required numbers of hidden layers and neurons are determined by NN experiments based on trial and error, as applied, for instance, in [28, 29]. If the required performance is not reached in the test process, the system proceeds to a new experiment. When the last NN experiment is reached, the number of neurons in each hidden layer is incremented and another set of 500 NN experiments starts again. This incrementing process continues until the required performance is reached. If it is not, and the maximum number of neurons is reached, an alternative route is taken: the number of hidden layers is incremented by one and a new set of 500 NN experiments starts again with the number of neurons for these hidden layers reinitialized. The system continues until excellent training and prediction are reached. A sketch of this search procedure is given below; the details of the proposed system are shown in Figure 3.
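The search logic can be summarized by the following Python-style sketch. Here train_and_test stands for a single NN experiment (training followed by testing with new random weights and biases) and is a placeholder rather than a routine from the paper; the stopping criterion is simply the required performance threshold.

```python
MAX_EXPERIMENTS = 500   # maximum NN experiments per architecture
MAX_NEURONS = 400       # maximum neurons per hidden layer

def find_best_network(initial_neurons, required_performance, train_and_test):
    hidden_layers = 1
    while True:
        neurons = initial_neurons
        while neurons <= MAX_NEURONS:
            for _ in range(MAX_EXPERIMENTS):          # repeat with new random weights/biases
                perf, net = train_and_test(hidden_layers, neurons)
                if perf >= required_performance:
                    return net                        # best network found, stop
            neurons += 1                              # all experiments failed: add a neuron
        hidden_layers += 1                            # neuron limit reached: add a hidden layer
        # the neuron count is reinitialized for the enlarged architecture
```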

The proposed system is based on the RPROP algorithm, using a tan-sigmoid transfer function in the hidden layers and a linear transfer function in the output layer. More hidden layers or neurons require more computation but allow the network to solve more complicated problems. Therefore, many trials are performed to find the best network that uses a low number of hidden layers and a low number of neurons.

After training, in the test process of the two-spiral problem, it is observed that the chosen algorithm with two hidden layers of 77 neurons each is very effective for reaching the optimal classification; see Figure 4(a). In the test process of both the pressure drop and shear stress problems, one hidden layer with 20 neurons is found to be enough for reaching the optimal performance, as specified in Figure 4(b). The network in the proposed system is first set up in Figure 4 with random weight and bias values, where IW represents the input weights, LW1 and LW2 the layer weights, b1 the biases of the first hidden layer, and b2 and b3 the biases of the subsequent layers. The obtained weights and biases of the best trained network for the pressure drop and shear stress problems are shown in Table 2.
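For concreteness, the two best architectures can be initialized with random weights and biases as in the following sketch. The array names follow the Figure 4 notation, while the shapes and the use of a standard normal initializer are assumptions made for this illustration only.

```python
import numpy as np

rng = np.random.default_rng()

def init_spiral_net(R=2, S=77):
    # Two hidden layers of 77 tan-sigmoid neurons each, one linear output.
    return {"IW": rng.standard_normal((S, R)),  "b1": rng.standard_normal(S),
            "LW1": rng.standard_normal((S, S)), "b2": rng.standard_normal(S),
            "LW2": rng.standard_normal((1, S)), "b3": rng.standard_normal(1)}

def init_rheology_net(R=2, S=20):
    # One hidden layer of 20 tan-sigmoid neurons, one linear output.
    return {"IW": rng.standard_normal((S, R)), "b1": rng.standard_normal(S),
            "LW": rng.standard_normal((1, S)), "b2": rng.standard_normal(1)}
```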

4. Results

The proposed system is applied to three problems: the two-spiral problem, and the pressure drop and the shear stress across the short orifice die as a function of shear rate at different mean pressures for linear low-density polyethylene copolymer at C. The obtained results are described in the following three subsections.

4.1. Two-Spiral Problem

This problem is used to learn a mapping function (two inputs and one output) that distinguishes points on two intertwined spirals. It is a typically difficult problem because of its extreme nonlinearity. The proposed system was first trained on the 194 points of (x, y) coordinates using one hidden layer. The obtained performance was 92.8% at 2157 epochs and 400 neurons; see Figure 6(a) and Table 1. The misclassified points are 6 in the triangle spiral and 8 in the circle spiral.

The training process is then continued with one additional hidden layer. The obtained performances are 95.9%, 99.5%, and 100% using 50, 60, and 77 neurons, respectively; see Figures 6(b)–6(d) and Table 1. The numbers of training epochs for these performances are 10000, 10000, and 5896, respectively (see Figure 5). All 194 training patterns are classified correctly using the last network architecture.

4.2. Linear Low-Density Polyethylene Copolymer

The proposed system described above was applied to the data of the pressure drop and shear stress across the short orifice die as a function of shear rate at different mean pressures for linear low-density polyethylene copolymer at C, using one hidden layer.

The system was trained using the chosen neural network on six cases of different mean pressures for each of the pressure drop and the shear stress as a function of shear rate. These mean pressure values are 1, 100, 200, 300, 500, and 600, each multiplied by 10^5 Pa. The performances of the obtained networks are shown in Figure 7. The obtained networks were tested in order to choose the best one. This network was tested on the six cases mentioned above and used to predict the held-out case (group number 5). Figure 8 shows the neural network results for the six training cases and the one predicted case of the pressure drop as a function of shear rate. Figure 9 likewise shows the neural network results for the six training cases and the one predicted case of the shear stress as a function of shear rate, which represents the flow curve. These figures illustrate excellent performance in both training and prediction. The results for the dependence of pressure drop and shear stress on shear rate at different mean pressures are presented in the following two subsections.

4.2.1. Pressure Dependence on Shear Rate

Figure 8 shows the six tested cases and the one predicted case of the pressure drop as a function of shear rate, compared to the experimental data for linear low-density polyethylene copolymer at C.

4.2.2. Shear Stress Dependence on Shear Rate (Flow Curve)

Figure 9 shows the six tested cases and the one predicted case of the shear stress as a function of shear rate, compared to the experimental data representing the flow curve for linear low-density polyethylene copolymer at C.

5. Conclusion

The proposed system is designed to automatically find the best network, that is, the one that gives the best test and prediction performance. The technique starts by performing 500 NN experiments; if these fail, the number of neurons in each hidden layer is incremented and another 500 NN experiments are carried out. When the neuron limit is reached, the alternative route is taken: the number of hidden layers is incremented by one, the number of neurons in these hidden layers is reinitialized, and a new set of 500 NN experiments is performed. This process continues until the required performance is reached. In this way, many trials are performed automatically to find a network that uses a low number of hidden layers and neurons.

The obtained performance on the two-spiral problem is low when one hidden layer is used in the network architecture, even with the number of neurons increased up to 400. The performance improves with two hidden layers: it is 95.9% with 50 neurons, 99.5% with 60 neurons, and 100% with 77 neurons. At the best performance, all points of the two-spiral problem are correctly classified.

For the other two problems, it was found that one hidden layer with 20 neurons is enough to reach the optimal solution. The NN trained using this system shows excellent agreement with the experimental data for both the shear stress and pressure drop problems. The NN technique is also designed to simulate distributions not presented in the training set, and it matches them effectively.

NN simulation using the RPROP algorithm is a powerful mechanism for classifying all points of the two spirals, and for predicting the flow curves (dependence of shear stress on shear rate) and the dependence of pressure drop on shear rate at a given mean pressure across the short orifice die for linear low-density polyethylene copolymer at C.