Abstract

In this work, a neural controller for wind turbine pitch control is presented. The controller is based on a radial basis function (RBF) network with an unsupervised learning algorithm. The RBF network uses the error between the output power and the rated power and its derivative as inputs, while the integral of the error feeds the learning algorithm. A performance analysis of this neurocontrol strategy is carried out, showing the influence of the RBF parameters, wind speed, learning parameters, and control period on the system response. The neurocontroller has been compared with a proportional-integral-derivative (PID) regulator for the same small wind turbine, obtaining better results. Simulation results show how the learning algorithm allows the neural network to adjust the proper control law to stabilize the output power around the rated power and reduce the mean squared error (MSE) over time.

1. Introduction

Green directives in many countries promote the use of renewable energies to improve the sustainability of worldwide energy systems. Indeed, the number of terawatt-hours produced by clean energies grows each year [1]. Among clean energies, wind is the second most used natural resource after hydropower, due to its high efficiency. Although it is a mature technology, there are still many engineering challenges related to wind turbines (WTs) that must be addressed [2].

Depending on the type of wind turbine, different control actions can be applied, namely: the pitch angle of the blades, or rotor control, which acts as a brake to maintain the rated power of the turbine once the wind surpasses a certain threshold; the yaw angle, which changes the attitude of the nacelle to match the wind stream direction; and finally, the generator speed control, which seeks the optimal rotor velocity when the wind is below the rated-output speed. The WT controller is in charge of managing all of these mechanisms to optimize the efficiency of the system while guaranteeing safety under all possible wind conditions. This is even more critical for floating offshore wind turbines (FOWTs), as it has been shown that the control system can affect the stability of the floating device [3, 4].

The pitch control of a wind turbine is a complex task in itself due to the highly nonlinear behaviour of these devices, the coupling between the internal variables, and because they are subjected to uncertain and varying parameters caused by external loads, mainly wind, and, in the case of FOWTs, also waves and currents. These reasons have led researchers to explore intelligent control techniques to tackle these challenges [5]. Among traditional control solutions, sliding mode control has recently been applied with successful results, such as in [6], where a PI-type sliding mode control (SMC) strategy for permanent magnet synchronous generator- (PMSG-) based wind energy conversion system (WECS) uncertainties is presented. Nasiri et al. [7] proposed a supertwisting sliding mode control for a gearless wind turbine with a permanent magnet synchronous generator. A robust SMC approach is also proposed in [8], where the authors use the blade pitch as control input in order to regulate the rotor speed to a fixed rated value. In [9], an adaptive robust integral SMC pitch angle controller and a projection-type adaptation law are synthesized to accurately track the desired pitch angle trajectory while compensating model uncertainties and disturbances.

Regarding intelligent control, fuzzy logic has been widely applied to wind turbine pitch control. For example, in [10], a pitch angle fuzzy control is proposed and compared to a PI controller under real weather characteristics and load variations. Rocha et al. [11] applied a fuzzy controller to a variable speed wind turbine and compared the results with a classical proportional controller in terms of system response characteristics. Rubio et al. [12] presented a fuzzy logic-based control system for a wind turbine installed on a semisubmersible platform. However, applications of neural networks to turbine pitch control are scarcer, maybe due to the lack of real data to train the networks [13]. Asghar and Liu [14] designed a neurofuzzy algorithm for the optimal rotor speed of a wind turbine. In [15], artificial neural network-based reinforcement learning for WT yaw control is presented. In [5], a passive reinforcement learning algorithm solved by particle swarm optimization is used to handle an adaptive neurofuzzy type-2 inference system for controlling the pitch angle of a real wind turbine. In [16], a robust H∞ observer-based fuzzy controller is designed to control the turbine using the estimated wind speed, and two artificial neural networks are used to accurately model the aerodynamic curves. From a different point of view, in [17], the authors proposed an information management system based on mixed integer linear programming (MILP) for a wind power producer that owns an energy storage system and participates in a day-ahead electricity market.

In this work, we have focused on the pitch control of a small wind turbine. Starting from the neural control strategy proposed in [18], we have extended it to deal with the dynamics of the pitch actuator. Besides, the derivative and the integral of the power error have been added as inputs to the learning algorithm. This way, the error variation and the past error values are considered when updating the weights of the neural network, which helps to accelerate the learning process. The main contribution of this paper is twofold. On the one hand, a radial basis function (RBF) network wind turbine pitch controller is designed and implemented. This controller uses the output power to update the weights of the neural network in an unsupervised way. On the other hand, a detailed analysis has been carried out on how the configuration of the neural network, the learning algorithm, and the controller parameters affect the control performance and the evolution of the error. Another advantage of the approach presented here is that, in contrast to traditional controllers, which use different control schemes for different wind speed regions, only one controller is used for all operational regions of the wind turbine.

The rest of the paper is organized as follows. Section 2 describes the model of the small wind turbine used. Section 3 explains the neural controller architecture and the unsupervised learning strategy. The results for different neural network configurations and learning parameters are analysed and discussed in Section 4. The paper ends with the conclusions and future works.

2. Wind Turbine Model Description

The model of a small 7 kW wind turbine is developed. The ratio of the gear box is set to 1, so the rotor torque is the same as the mechanical torque of the generator, T_m (N·m), given by the following equation [19]:

T_m = (1/2) ρ A C_p(λ, θ) v³ / ω,

where C_p is the power coefficient; ρ is the air density (kg/m³); A is the area swept by the turbine blades (m²); v is the wind speed (m/s); and ω is the angular rotor speed (rad/s). The blade swept area can be approximated by A = πR², where R is the radius or blade length.

The power coefficient is usually determined experimentally for each turbine. There are different expressions to approximate C_p; in this case, it has been calculated as a function of the tip speed ratio λ and the blade pitch angle θ (rad), where the values of the coefficients of the expression depend on the characteristics of the wind turbine. The pitch angle θ is defined as the angle between the rotation plane and the blade cross section chord, and the tip speed ratio is given by the following equation:

λ = ωR/v.

From equation (3), it is possible to observe how C_p decreases with the pitch angle. Indeed, when θ = 0 (rad), the blades fully face the wind and the turbine produces at its full potential, but with θ = π/2 (rad), the blades are out of the wind.
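As an illustration of these aerodynamic relations, the following Python sketch evaluates a commonly used exponential approximation of C_p(λ, θ) together with the resulting mechanical torque. The coefficient values, air density, and blade radius are illustrative placeholders, not the parameters of the 7 kW turbine of the paper.

```python
import math

def power_coefficient(lmbda, theta, c=(0.5176, 116, 0.4, 5, 21, 0.0068)):
    """Common exponential-family approximation of Cp(lambda, theta).

    The default coefficients are illustrative, NOT the ones of the
    turbine modelled in the paper (those depend on the turbine [19])."""
    c1, c2, c3, c4, c5, c6 = c
    # Auxiliary inverse tip speed ratio used by this family of approximations
    inv_li = 1.0 / (lmbda + 0.08 * theta) - 0.035 / (theta**3 + 1.0)
    return c1 * (c2 * inv_li - c3 * theta - c4) * math.exp(-c5 * inv_li) + c6 * lmbda

def mechanical_torque(v, omega, theta, rho=1.225, R=3.0):
    """Rotor torque Tm = 0.5*rho*A*Cp*v^3 / omega, with A = pi*R^2."""
    A = math.pi * R**2
    lmbda = omega * R / v            # tip speed ratio
    cp = power_coefficient(lmbda, theta)
    return 0.5 * rho * A * cp * v**3 / omega
```

As the text notes, increasing the pitch angle reduces C_p, which is what lets the pitch act as an aerodynamic brake.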

The pitch actuator is modelled as a second-order system, an assumption widely used to model pitch systems in wind turbines and other mechanical actuators [20]. In this case, the pitch reference θ_ref is the input of the pitch actuator and θ is its output:

θ(s)/θ_ref(s) = ω_n² / (s² + 2 ζ ω_n s + ω_n²).
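A minimal time-domain sketch of such a second-order actuator, integrated with a simple Euler scheme; the natural frequency and damping values here are assumptions for illustration, not the actuator parameters used in the paper.

```python
def simulate_pitch_actuator(theta_ref, t_end, dt=1e-3, wn=10.0, zeta=0.7):
    """Second-order actuator theta'' + 2*zeta*wn*theta' + wn^2*theta = wn^2*theta_ref.

    wn (rad/s) and zeta are illustrative placeholder values."""
    theta, dtheta = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        ddtheta = wn**2 * (theta_ref - theta) - 2 * zeta * wn * dtheta
        dtheta += ddtheta * dt     # integrate acceleration
        theta += dtheta * dt       # integrate velocity
    return theta
```

For a constant reference, the output settles at the reference value, as expected of a unity-gain second-order system.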

Thus far, the model has focused on the mechanical aspects of the system, but the dynamics of the generator combine the mechanical and electrical domains. The relation between the rotor angular speed ω and the mechanical torque T_m in a direct current generator is given by the following expressions [21]:

J dω/dt = T_m − T_e − K_f ω,
T_e = K φ i,

where T_e is the electromagnetic torque (N·m), J is the rotational inertia (kg·m²), K_f is the friction coefficient (N·m·s/rad), K is a dimensionless constant of the generator, φ is the magnetic flow coupling constant (V·s/rad), and i is the armature current (A).

The armature current of the generator is then given by the following equations:

e = K φ ω,
L_a di/dt = e − R_a i − V,

where L_a is the armature inductance (H), e is the induced electromotive force (V), V is the generator output voltage (V), and R_a is the armature resistance (Ω). For simplicity, it is commonly assumed that the load is purely resistive, given by R_L. Thus, V = R_L i, and the output power P_out is P_out = V i = R_L i².
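Under the resistive-load assumption, the steady-state output power follows directly from these relations. The sketch below uses placeholder generator constants, not the Table 1 values:

```python
def output_power(omega, K=1.0, phi=0.5, Ra=1.0, RL=10.0):
    """Steady-state electrical chain with a purely resistive load:
    e = K*phi*omega, i = e/(Ra + RL) once di/dt = 0, Pout = RL*i**2.

    All parameter values are illustrative placeholders."""
    e = K * phi * omega          # induced EMF (V)
    i = e / (Ra + RL)            # steady-state armature current (A)
    V = RL * i                   # voltage across the load (V)
    return V * i                 # output power (W)
```

This makes explicit why regulating the rotor speed (through the pitch) regulates the output power: P_out grows with ω through the induced EMF.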

By the combination of the previous equations (1)–(9), the following expressions summarize the dynamics of the wind turbine.

In this work, we have focused on controlling the output power by means of the pitch angle, so the input control variable is θ_ref and the controlled output variable is P_out (boldfaced in equations (10)–(15)). The state variables are θ, dθ/dt, ω, and i.

The wind turbine parameters used during the simulations are shown in Table 1 [19].

3. Neural Pitch Control Strategy

3.1. Neural Controller Architecture

The architecture of the proposed wind turbine neural controller is shown in Figure 1. The error is the difference between the power reference signal (rated power) and the power output; the nominal power of this wind turbine is 7 kW. The power error, P_err, and its derivative, dP_err, are saturated to maintain their values within a suitable range; the saturated signals are P_err^sat and dP_err^sat, respectively. They are the inputs of the radial basis function neural network that implements the controller. The output of the neural network is biased by π/4 and goes through a saturation block to adapt it to the range [0, π/2] (rad). The result of this process is the signal θ_ref that will be used as the pitch reference of the wind turbine control.

The neural network must learn the control law θ_ref = f(P_err^sat, dP_err^sat), which will be able to stabilize the wind turbine output power around its nominal value. This function is not known beforehand. In other control schemes, the weights of the RBF network are updated using supervised learning, which requires a known input/output dataset to train the neural network; this way, the network generates the expected output when it receives an input similar to the ones used for the training. However, in our case, there are no labelled output data to train the network.

If we knew the correct pitch control signal for each P_err^sat and dP_err^sat, we would know the appropriate control law, and we would not need a neural network to learn it. For this reason, it is not possible to use supervised learning. That is why in this approach the learning algorithm receives the error signal P_err, its derivative, and its integral and combines them to generate the new weights of the neural network.

The equations of this neurocontrol strategy are the following:

where T_c is the control period (s); the maximum and minimum values of the input variables, P_err^min, P_err^max, dP_err^min, and dP_err^max, are the constants that allow adjusting the range of the controller, with the constraints P_err^min < 0 < P_err^max and dP_err^min < 0 < dP_err^max; RBF(·) is the radial basis function mapping; and L(·) denotes the function of the learning algorithm.

The MIN and MAX operators in equations (18), (19), and (22) are applied to maintain the signal value within its boundaries. The expression MIN(a, MAX(b, x)) sets a as the upper boundary, b as the lower limit, and x as the signal to be saturated. The MAX operator keeps x above the lower bound, and the output of the MAX operator is kept below the upper limit by the MIN operator.
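This nested MIN/MAX saturation is a plain clamp; a one-line Python equivalent:

```python
def saturate(x, lower, upper):
    """MIN(upper, MAX(lower, x)): keep the signal within its boundaries."""
    return min(upper, max(lower, x))
```

For example, with the limits [−1000, 1000] used later for the power error, any error beyond ±1000 W is clipped to the boundary.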

All the variables in equations (16)–(22) are updated every control period T_c; otherwise, their values remain constant.

3.2. Setting Up RBF

The aim of the RBF network is to compute the bidimensional function which implements the control law able to stabilize P_out around P_ref. As is well known, any differentiable continuous function can be approximated by a sum of exponential functions. In this work, we take advantage of this property to approximate the control law by the RBF neural network. In order to map the input space to the output space, we discretize the bidimensional input space of the neural network by applying a gridding. Figure 2 shows the N_x × N_y grid. The centres of the neurons are initialized to the intersection points of the grid lines. This will set the precision of the error.

The number of rows and columns, the horizontal and vertical lengths of the cells, Δx and Δy, respectively, and the number of neurons M are related by the following expressions:

Δx = (P_err^max − P_err^min)/(N_x − 1),
Δy = (dP_err^max − dP_err^min)/(N_y − 1),
M = N_x N_y,

where the number of horizontal lines, i.e., rows, is N_y, and the number of vertical lines is N_x. In order to ensure that a horizontal line and a vertical line intersect the point (0, 0), N_x and N_y must be odd and bigger than 1.

Once Δx and Δy are determined, the centres of the neurons are obtained by the following equation:

c_i = ((i DIV N_y) Δx + P_err^min, (i MOD N_y) Δy + dP_err^min), i = 0, …, M − 1,

where c_i is the centre of the i-th neuron.
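The gridding and centre placement can be sketched in Python as follows; the index arithmetic mirrors the DIV/MOD initialization of Algorithm 1 (function and variable names are illustrative):

```python
def neuron_centres(nx, ny, xmin, xmax, ymin, ymax):
    """Place the M = nx*ny RBF centres at the grid intersections.

    nx and ny must be odd and > 1 so that one centre falls at (0, 0)."""
    assert nx % 2 == 1 and ny % 2 == 1 and nx > 1 and ny > 1
    dx = (xmax - xmin) / (nx - 1)   # horizontal cell length
    dy = (ymax - ymin) / (ny - 1)   # vertical cell length
    return [((i // ny) * dx + xmin, (i % ny) * dy + ymin)
            for i in range(nx * ny)]
```

With nx = ny = 5 over the limits [−1000, 1000] × [−400, 400], this yields 25 centres, one of them exactly at the origin.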

The output of the RBF neural network (20) is then given by the following expressions (where the time variable has been omitted for the sake of clarity):

RBF_out = Σ_{i=0}^{M−1} w_i e^{−(d_i/σ_i)²},   (28)
d_i = √( ((P_err^sat − c_i,x)/(P_err^max − P_err^min))² + ((dP_err^sat − c_i,y)/(dP_err^max − dP_err^min))² ),   (29)

where d_i is a normalized distance measure, M is the number of neurons in the hidden layer, w_i is the weight of the i-th neuron, and σ_i is the width of the i-th neuron activation function, which is normally the same for all neurons. The width of the neuron is also related to the error accuracy. The normalized distance (29) is calculated as the 2-D Euclidean distance once each 1-D distance has been normalized. The range of P_err^sat is [P_err^min, P_err^max], so division by P_err^max − P_err^min normalizes its contribution to the range [0, 1], whereas the range of dP_err^sat is [dP_err^min, dP_err^max], and thus division by dP_err^max − dP_err^min normalizes it to [0, 1]. This way, the output range of (29) is [0, √2].
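A sketch of this normalized-distance Gaussian mapping in Python; names and signatures are illustrative assumptions:

```python
import math

def rbf_output(x, y, centres, weights, sigma, xrange, yrange):
    """Gaussian RBF mapping of the saturated error pair (x, y).

    Each 1-D distance is normalized by its input range before the
    2-D Euclidean distance is taken. Returns the network output and
    the per-neuron activations (the F array of Algorithm 1)."""
    out = 0.0
    F = []
    for (cx, cy), w in zip(centres, weights):
        d = math.hypot((x - cx) / xrange, (y - cy) / yrange)
        f = math.exp(-(d / sigma) ** 2)   # activation of this neuron
        F.append(f)
        out += w * f
    return out, F
```

At a neuron's centre the exponential is exactly 1, so that neuron's weight dominates the output, which is the property the unsupervised update exploits.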

3.3. Unsupervised Learning Algorithm

The parameters that can be updated by a learning algorithm in an RBF neural network are the centres of the RBF neurons, the width parameters σ_i, and the output weights. As explained before, the centres of the neurons are equally distributed over the whole input space. In addition, in this work, as is common, it is assumed that the entire input space is equally important when obtaining the output of the network; therefore, the σ_i parameters are set in advance to the same value for all the neurons. Thus, the learning algorithm only has to update the weights.

As said before, many control schemes with RBF neural networks use supervised learning to update the weights but this is not the case. There are no labelled output data to train the network, so the neural network must learn a control law previously unknown in an unsupervised way. This learning procedure is as follows.

The input space has been pseudodiscretized, placing the RBF neurons at the grid points. Given a network input pair (P_err^sat, dP_err^sat), the neuron closest to this pair will have the biggest contribution to the output value of the mapping. Although it will not be the only neuron that influences the output, the contribution decreases with the distance and increases with the width of the activation function.

If the centres of the neurons are separated enough and the width of the activation function is correctly selected, the contribution of the surrounding neurons may be neglected, and all points in the input space are discretized to the centre of their closest neuron. Therefore, by updating the weight of the i-th neuron, it is possible to adjust the output value at the input pairs close to its centre c_i, due to the fact that at the centre the value of the exponential function is 1. Thus, the closer the input pair is to the centre of some neuron, the better the approximation of the RBF function is. The learning algorithm will be in charge of updating the weights of the network based on the output power errors, adjusting the mapping of the function. In the output layer of the neural network, all the partial contributions of the neurons are linearly combined to obtain the output value (28).

In order to illustrate this unsupervised learning procedure, Figure 3(a) shows an example of the initial surface of the weights of the neural network with all w_i set to 1, and Figure 3(b) shows the corresponding pitch control law at the output of the network before learning. Figure 4(a) presents the weight surface after applying the learning strategy, with the final values of the weights, and Figure 4(b) shows the resulting pitch control law. It is possible to see, as expected, that positive errors increase the weights, bending the surface upwards and thus incrementing the output value of the neural network. This means reducing the pitch angle reference, θ_ref, and enlarging the output power.

In this work, we take as starting point the typical supervised learning strategy of an RBF network to reduce the error at each iteration, given by equation (27), where y_d is the expected output value and y is the current output value.

As y_d is not available, in order to make the power error zero, the term (y_d − y) is replaced by a weighted combination of the power error, its derivative, and its integral. Equation (28) details how the function L of equation (21) is then calculated, that is, how the weights of the RBF neural network are modified:

Δw_i = μ (K_P P_err^sat + K_D dP_err^sat + K_I ∫ P_err dt) e^{−(d_i/σ_i)²},

where μ is the learning rate and K_P, K_D, and K_I are positive constants. As may be observed, the exponential term is the same as in equations (25) and (26).

The following pseudocode details the unsupervised algorithm which updates the weights of the RBF network (Algorithm 1):

% Initialization
Xmin ⟵ PerrMin
Xmax ⟵ PerrMax
Ymin ⟵ dotPerrMin
Ymax ⟵ dotPerrMax
IncX ⟵ (Xmax − Xmin)/(Nx − 1)
IncY ⟵ (Ymax − Ymin)/(Ny − 1)
M ⟵ Nx × Ny
for i = 0 to M − 1
  cNetX ⟵ (i DIV Ny) × IncX + Xmin
  cNetY ⟵ (i MOD Ny) × IncY + Ymin
  cNet (i) ⟵ (cNetX, cNetY)
  W (i) ⟵ 1
  Fold (i) ⟵ 0
end for
F ⟵ Fold
tconOld ⟵ 0
pitchCon ⟵ 0
% Execute algorithm
(errPow, derrPow, errPowSum) ⟵ MODEL (0)
for t = 0 to tEnd
   if t ≥ tconOld + Tc then
      errPowSat ⟵ MIN (Xmax, MAX (Xmin, errPow))
      derrPowSat ⟵ MIN (Ymax, MAX (Ymin, derrPow))
      [RBFout, F] ⟵ RBF (cNet, W, errPowSat, derrPowSat)
      Fold ⟵ F
      pitchCon ⟵ (pi/4) − RBFout
      if ABS (errPowSat) < minErr then
         Winc ⟵ 0
      else
        errM ⟵ errPowSat × KP + derrPowSat × KD + errPowSum × KI
        Winc ⟵ Fold × errM × mu
      end if
      W ⟵ W + Winc
      tconOld ⟵ t
    end if
    (errPow, derrPow, errPowSum) ⟵ MODEL (pitchCon)
end for

Here, M is the number of neurons, W is an array with the weights, Nx is the number of neurons along the x-axis of the grid input space, and Ny is the number of neurons along the y-axis. A learning threshold, minErr, is defined so that errors below that value are discarded. The centres of the neurons are stored in the array cNet. The tuning parameters of the learning rate are mu, KP, KD, and KI; the control sampling time is Tc; F is an array with the output of each exponential function of the RBF before the addition over all the neurons (28); and Fold is an array that saves the previous value of F.

The model of the WT, MODEL(), receives as input the pitch control reference and returns the power error, its derivative, and its integral. The function RBF() calculates (28). DIV is the integer division, MOD is the modulo operator, ABS() is the absolute value operator, and MIN() and MAX() are the minimum and maximum functions. Therefore, the external parameters of the algorithm are PerrMin, PerrMax, dotPerrMin, dotPerrMax, Nx, Ny, mu, KP, KD, KI, and minErr.

At the beginning of the procedure, all variables are initialized, and the centres of the RBF are calculated. Then, the simulation is run every Ts seconds. The controller is updated every Tc seconds; therefore, Tc must be larger than Ts. Each control sample time Tc, the output of the RBF, RBFout, and the WT pitch reference, pitchCon, are obtained. If the error is above the threshold, a combination of the error, its derivative, and its integral is calculated (variable errM). Then, the array with the increments of the weights, Winc, is obtained from the previous F array, Fold, and the current error measurement, errM.
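The weight-update branch of Algorithm 1 can be sketched in Python as follows; this is a sketch with illustrative names, not the authors' Matlab implementation:

```python
def update_weights(W, F_old, err_sat, derr_sat, err_sum,
                   mu, KP, KD, KI, min_err):
    """One unsupervised learning step.

    Errors below the minErr threshold are discarded; otherwise each
    neuron is reinforced proportionally to its previous activation
    F_old[i] and to the combined error measurement errM."""
    if abs(err_sat) < min_err:
        return W                       # no learning for small errors
    errM = err_sat * KP + derr_sat * KD + err_sum * KI
    return [w + f * errM * mu for w, f in zip(W, F_old)]
```

A positive error thus raises the weights of the recently active neurons, which raises the RBF output and lowers the pitch reference, exactly the bending of the weight surface described for Figures 3 and 4.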

4. Performance Analysis of the Neurocontrol Strategy

A performance analysis of this unsupervised neurocontrol strategy has been carried out under different network configurations and varying some parameters of the learning algorithm and of the pitch control law. The software Matlab/Simulink has been used. The duration of each simulation is 100 s. In order to reduce the discretization error, a variable step size has been used for the simulation experiments, with maximum step size set to 10 ms. The control sample time has been fixed to 100 ms.

The neurocontroller performance is compared with a PID regulator. In order to make a fair comparison, the PID output has been scaled to adjust its range to that of the neural controller, and it has been also biased by π/4. The equation of the biased PID controller is then θ_ref = π/4 − u_PID, where u_PID is the scaled PID action.

The wind turbine nominal power is 7 kW, and thus the reference P_ref = 7000 W. The PID tuning parameters [K_p, K_i, K_d] have been determined by trial and error, and their values are [1, 0.2, 0.9], respectively. The parameter minErr of the learning algorithm is set to 15.
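A sketch of the biased and saturated PID computation, using the trial-and-error gains above; the error scaling factor is a placeholder assumption, not the scaling used by the authors:

```python
import math

def biased_pid(err, err_int, err_der, kp=1.0, ki=0.2, kd=0.9,
               scale=1e-4):
    """Biased PID pitch reference: the PID action is scaled
    (the 'scale' value is illustrative), subtracted from pi/4,
    and clamped to [0, pi/2] like the RBF controller output."""
    u = scale * (kp * err + ki * err_int + kd * err_der)
    return min(math.pi / 2, max(0.0, math.pi / 4 - u))
```

With zero error the pitch reference sits at the π/4 bias; a large positive power error drives the pitch towards 0 to harvest more power.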

The performance of the controllers has been evaluated with the MSE, the mean value, and the variance, calculated as

MSE = (T_s/T_sim) Σ_k P_err(t_k)²,
mean = (T_s/T_sim) Σ_k P_out(t_k),
Var = (T_s/T_sim) Σ_k (P_out(t_k) − mean)²,

where T_sim is the simulation time and T_s is the sampling time of the fixed resampling that is necessary due to the variable step size that has been used.
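These three indices can be computed from the resampled power signal as, for example:

```python
def performance_metrics(p_out, p_ref):
    """MSE of the power error plus mean and variance of the output.

    p_out holds samples resampled at a fixed Ts (needed because the
    simulation itself uses a variable step size)."""
    n = len(p_out)
    mse = sum((p_ref - p) ** 2 for p in p_out) / n
    mean = sum(p_out) / n
    var = sum((p - mean) ** 2 for p in p_out) / n
    return mse, mean, var
```

Dividing the sums by n is equivalent to the T_s/T_sim factor, since n = T_sim/T_s samples are taken.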

Figure 5(a) shows the output power when different strategies are applied. The blue line represents the output when the pitch is permanently set to zero, the red one the output when the pitch angle is set to the feather position (90°), the yellow line the response with the PID, the purple one the response with the neural controller, and finally, the green line represents the rated power. Figure 5(b) shows a zoom of the previous figure to better see the variations of the signals. In this experiment, the wind is randomly generated with a speed between 11.5 and 14 m/s, the RBF has 25 neurons in the hidden layer, σ is set to 0.1, the saturation limits [P_err^min, P_err^max, dP_err^min, dP_err^max] are set to [−1000, 1000, −400, 400], and the learning rate is 1.5 × 10⁻⁴.

As shown in Figure 5, when the pitch is set to zero, the output power is always bigger than the rated power because the blades harness the maximum power of the wind. As expected, when the pitch is fixed to feather, the opposite happens, as the surface which faces the wind is minimum. Another interesting outcome is that the proposed neurocontroller is not only able to stabilize the output power around the nominal value, but its performance is better than the PID, particularly up to 50 s, and it is less oscillatory.

As the output power depends on the wind, different simulations were carried out varying the wind speed. The configuration of the neural network and the learning algorithm is the same as in the previous experiment. Figure 6 shows the influence of the wind speed regarding the mean square power error (MSE). The red bar is the MSE with the neural controller, and the blue one is the MSE with the PID. As expected, the higher the wind, the larger the error. For all the ranges of wind speed, the neurocontrol strategy has been proved to be better than the PID.

Table 2 summarizes the detailed results of the simulation experiments with different wind speeds between 12.2 and 12.8 m/s. At a wind speed below 12.2 m/s, the stabilized output power is always lower than 7 kW, even with pitch angle set to 0. With a wind speed over 12.8 m/s, the steady output power is always higher than 7 kW even when the pitch is set to 90°. In all the cases, the error is smaller with the neural controller than with the PID but for 12.5, 12.7, and 12.8 m/s, the mean obtained with the PID is slightly smaller.

A sinusoidal wind signal has also been tested, with an average wind speed of 12.5 m/s, an amplitude of 0.6 m/s, and a period of 50 s. The result of the experiment is shown in Figure 7, where the output power is represented with the same colour code as in Figure 5. To show how the RBF learns, Figure 7(a) shows the response for iterations 1 to 175. Figure 7(b) represents the output power for the different control strategies described before, once the system has already learned.

The learning capability of the neurocontroller is shown in Figure 8. The MSE converges quickly in a few iterations. It is also possible to observe an inflection point around iteration 30; from this point on, the learning speed decreases and the MSE hardly varies.

The frequency of the sinusoidal wind speed signal also influences the results. Figure 9 shows the results for different periods (blue, PID; red, neurocontrol). The minimum MSE is reached for the minimum period; from this value on, the MSE grows. At a period of 20 s, a local maximum appears for the neural controller, and the same happens at 35 s for the PID. From then on, the error decreases for both controllers. In all the cases, the error is much smaller with the neurocontroller than with the PID.

Table 3 summarizes the results obtained in this experiment. In all cases, the MSE is much smaller with the neural controller. Moreover, it is possible to observe how the response with the neural controller slightly improves when the period is larger than 20 s: the MSE and the variance decrease, and the mean value remains almost unchanged. Meanwhile, for the PID, several local minima and maxima appear in the evolution of the MSE and the variance. Nevertheless, the influence of the wind period is not very relevant.

4.1. Influence of the RBF

The influence of the configuration of the RBF neural network on the performance of the controller has also been evaluated. Different numbers of neurons, values of the σ parameter, and several saturation limits have been tested. The wind turbine is subjected to a random wind with mean speed between 11.5 and 14 m/s; the learning rate is 1.5 × 10⁻⁴, σ is set to 0.1 in this experiment, the lower and upper limits are set to [−1000, 1000, −400, 400], and the number of neurons varies.

Figure 10 shows the influence of the number of neurons, M, on the evolution of the MSE. The colour is associated with the number of neurons (see the legend). All the curves in this figure have a similar shape; the main difference is the slope before the inflection point. It is possible to see that the more neurons, the steeper the slope. In general, the error decreases with the number of neurons until the number is so large that the network does not learn. For example, the MSE with 441 neurons is bigger than that with 121.

To evaluate the influence of the width of the activation function, the configuration of the RBF network is set to the previous values, with the number of neurons M = 9, 25, and 121 (Figure 11, from left to right). The σ parameter varies from 0.05 to 0.75. In all the cases, the MSE tends to decrease as σ increases. There is a sharp drop in the MSE during the first iterations for values of σ greater than 0.25. The descent rate also grows with the number of neurons.

Figure 12 shows another perspective of the influence of σ on the error. It represents the MSE at iteration 200 for different σ values and different numbers of neurons. In this figure, it is also possible to see how the MSE decreases with σ until this parameter is around 0.25, where it starts to grow. This inflection point does not depend on the number of neurons, but the fall before that minimum does (the bigger the number of neurons, the larger the descent rate).

The performance of the neurocontroller can also be adjusted by modifying the saturation limits of the input space. Different sets of limit values have been tested, one varying P_err^max and another changing dP_err^max. In both experiments, the number of neurons M is set to 121, σ is 0.25, and 5 iterations are run. When P_err^max is changed, the value of dP_err^max is kept constant at 400, and when dP_err^max varies, the limit P_err^max is fixed to 1000. The corresponding negative boundaries have the same absolute value.

Table 4 shows the variation of the MSE, the output power mean, and its variance when P_err^max is modified from 100 to 1500. The MSE and the mean value decrease with P_err^max; however, the variance grows. This may be due to the fact that bigger values of P_err^max mean bigger variations in the output power and thus a larger variance. The influence on the MSE is explained since wider boundaries produce fewer saturated values and more available information for the learning process. But if the saturation is never reached, a too high value of P_err^max may be counterproductive, because the spatial distribution of the neurons makes more neurons useless.

Table 5 summarizes the variation of the MSE, the output power mean, and its variance when dP_err^max changes from 50 to 7050. Similar to Table 4, the MSE and the mean value are reduced when dP_err^max increases, until a local minimum is reached. This may also be explained by the reduction of the saturated values. However, in this case, the variance also decreases with dP_err^max.

4.2. Influence of the Learning Parameters

Several experiments have been carried out to show the influence of the learning parameters μ, K_P, K_D, and K_I. The configuration of the RBF network is M = 121, σ = 0.1, and the saturation limits of the previous experiments. In the first experiment, 200 iterations have been simulated while the tuple (K_P, K_D, K_I) is kept fixed. Figure 13 shows the results for different learning rates μ. Again, the MSE decreases at each iteration, and, as expected, the descent rate grows with the learning rate. These results may also be seen in Table 6 (at iteration 5). The output power mean value also decreases with the learning rate. However, the variance grows, since larger values of μ produce bigger increments in the weights of the neural network (28) and thus bigger variations in the pitch reference and greater changes in the output power.

In the next experiment, K_D and K_I are set to 0 and K_P is varied. The effect of varying K_P is the same as modifying μ, due to the fact that both are constants that multiply the power error (although the numerical results differ from those of the previous experiment). The results are shown in Table 7. The MSE and the output power mean value decrease with K_P, and the variance grows.

Now, K_P and K_I are set to 0 and K_D is varied. The results are shown in Table 8. Initially, the MSE and the output power mean decrease, but from a certain value of K_D on, they grow continuously. An increment of K_D makes the system learn faster; it also reacts faster to changes, so the MSE can be reduced. However, a very high value amplifies the first ramp that moves the pitch reference to 0. After this point, the system takes a long time to learn, as it would need a big downward ramp to recover the initial weight values of the neural network. This also explains why the variance decreases with K_D: after the initial ramp, the values are almost stable, producing only small variations in the weights and thus a small variance.

Finally, K_P and K_D are set to 0 and K_I is varied to test its influence. The results are shown in Table 9. Initially, the MSE and the output power mean decrease with K_I until it is equal to 0.1; from this value on, they grow continuously. K_I helps the controller to learn how to reduce the steady-state error, so the MSE decreases when K_I increases. However, if this parameter is too high, the controller becomes sluggish and the MSE grows. The variance also increases with K_I, since it makes the controller slower, so higher output values are reached, and keeping these high outputs longer generates a greater variance.

4.3. Influence of the Control Period

Once the influence of the neural network configuration and the learning algorithm parameters has been analysed, the last experiment evaluates how the control period affects the performance of the neurocontroller. The number of neurons is set to 11, σ = 0.1, and the remaining parameters keep the values of the previous experiments. Table 10 shows the results at iteration 5 when the control period varies from 10 to 100 ms. If the control sample time is too small, the neural controller reacts to the noisy component of the wind, and this increases the MSE and the variance. On the other hand, a very big control period makes the system too slow and also increases the MSE. Therefore, an intermediate value is the best option. In any case, the performance of the neural controller is much better than the PID response for all the control periods tested.

5. Conclusions and Future Works

In this work, an intelligent wind turbine pitch control strategy is presented, and the influence of the parameters of the neurocontrol systems is analysed. The pitch controller is based on an RBF neural network that learns in an unsupervised way. The control goal is to maintain the output power around its rated value, obtaining the appropriate pitch angle reference. The output power errors are introduced both in the neurocontroller and in the learning algorithm.

Extensive simulation tests have been carried out on a 7 kW wind turbine, varying different network configuration parameters as well as the wind speed. The performance of the neurocontroller is compared with a tuned PID obtaining better results in all the cases.

These experiments have led to some interesting conclusions. Among them, we can highlight the small influence of the wind frequency. The learning speed, however, grows significantly with the number of neurons. There exists an optimum value of σ, between 0.2 and 0.4, for each number of neurons. Another interesting result is how the gains K_P and K_D, as well as K_I, accelerate the learning; in general, low values of these tuning parameters improve the stability. The control sample time has a clear effect on the system response, making it slower or faster.

In the future, it would be desirable to test the proposal on a real prototype of a wind turbine. In addition, it would be interesting to apply this control strategy to a bigger turbine and to see how this control action affects the stability of a floating offshore wind turbine.

Data Availability

The findings of this study have been generated by the equations and parameters cited in the article.

Disclosure

An earlier version of this paper was presented at the 15th Int. Conf. on Soft Computing Models in Industrial and Environmental Applications, 2020 [18].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was partially supported by the Spanish Ministry of Science, Innovation and Universities, under MCI/AEI/FEDER Project no. RTI2018-094902-B-C21.