Mathematical Problems in Engineering
Volume 2018, Article ID 9648126, 16 pages
https://doi.org/10.1155/2018/9648126
Research Article

A Biologically Inspired Framework for the Intelligent Control of Mechatronic Systems and Its Application to a Micro Diving Agent

1RoboTeAM, Robotics & Machine Learning, Federal University of Rio Grande do Norte, Natal/RN, Brazil
2Institute of Mechanics and Ocean Engineering, Hamburg University of Technology, Hamburg, Germany
3Siemens Corporate Technology, 1936 University Avenue, Berkeley, CA 94704, USA

Correspondence should be addressed to Wallace M. Bessa; wmbessa@ct.ufrn.br

Received 16 September 2018; Accepted 26 November 2018; Published 30 December 2018

Academic Editor: Jorge Rivera

Copyright © 2018 Wallace M. Bessa et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Mechatronic systems are becoming an intrinsic part of our daily life, and the adopted control approach in turn plays an essential role in the emulation of intelligent behavior. In this paper, a framework for the development of intelligent controllers is proposed. We highlight that robustness, prediction, adaptation, and learning, which may be considered the most fundamental traits of all intelligent biological systems, should be taken into account in the design of the control scheme. Hence, the proposed framework is based on the fusion of a nonlinear control scheme with computational intelligence and allows mechatronic systems to make reasonable predictions about their dynamic behavior, adapt themselves to changes in the plant, learn by interacting with the environment, and remain robust to both structured and unstructured uncertainties. In order to illustrate the implementation of the control law within the proposed framework, a new intelligent depth controller is designed for a microdiving agent. On this basis, sliding mode control is combined with an adaptive neural network to provide the basic intelligent features. Online learning by minimizing a composite error signal, instead of supervised off-line training, is adopted to update the weight vector of the neural network. The boundedness and convergence properties of all closed-loop signals are proved using a Lyapunov-like stability analysis. Numerical simulations and experimental results obtained with the microdiving agent demonstrate the efficacy of the proposed approach and its suitability for both stabilization and trajectory tracking problems.

1. Introduction

In the last few years, we have witnessed the emergence of autonomous and intelligent mechatronic systems. Household robots, such as automated vacuum cleaners and lawn mowers, as well as delivery drones and self-driving cars, just to name a few, are examples of the way intelligent devices are entering our lives. Unlike industrial robots, which operate in a well-structured environment and perform repetitive tasks, autonomous systems have to cope with a high level of uncertainties and adjust their behavior according to both internal and external changes. Thus, although conventional control schemes have been successfully employed in a large variety of robotic manipulators and other industrial applications, they might not represent the most appropriate choice for mechatronic systems subject to drastic changes in operating conditions. On the other hand, because of their ability to undertake assignments in an environment of uncertainty and imperfect information, soft computing techniques, such as fuzzy logic and artificial neural networks, are commonly used when highly uncertain plants are to be considered [1].

Assuming that intelligent control may be understood as a combination of modern control theory with computational intelligence [2, 3], many authors have adopted this approach to propose their control schemes. Boukens and Boukabou [4], for example, associated an optimal controller with an adaptive neural network to solve the trajectory tracking problem of a mobile robot subject to external disturbances, unmodeled dynamics, and nonholonomic constraints. In [5], a gain scheduling PID controller optimized by a genetic algorithm is designed for an unmanned marine surface vessel. Lu et al. [6] combined an interval type 2 fuzzy neural network with a proportional-derivative controller and applied the resulting scheme to a Delta parallel robot. By means of the Lyapunov stability theory, Mai and Commuri [7] proposed a robust neural network controller for a prosthetic ankle joint with gait recognition. A radial basis function (RBF) neural network is adopted in [8] in order to design a reinforcement learning based adaptive nonlinear optimal controller for an air-breathing hypersonic flight vehicle. In [9], adaptive neural networks are also considered for the reinforcement learning-based control of wheeled mobile robots. A neural-network-based online near-optimal controller for overhead cranes is obtained in [10] by solving the corresponding Hamilton-Jacobi-Bellman equation. Using a network model to mimic those brain structures related to emotions, Lucas et al. [11] presented the so-called Brain Emotional Learning Based Intelligent Controller (BELBIC). This approach was later generalized in [12] by choosing a nonlinear learning module with universal approximation features.

As a result of the essentially nonlinear nature of almost all mechatronic systems, nonlinear control methods have been frequently merged with soft computing approaches. The recursive structure of backstepping, for instance, represents a powerful feature that has been exploited in the development of intelligent controllers. In [13], RBF neural networks are adopted as disturbance observers in a backstepping based approach to handle a class of strict-feedback nonlinear systems subject to modeling imprecisions and input nonlinearities. Lin et al. [14] used neural networks and a tuning function-based adaptive technique to propose an adaptive backstepping fault-tolerant control algorithm. Gao et al. [15] also dealt with fault-tolerant control using neural networks and backstepping, but, in their case, the controller is suitable for multiple-input and multiple-output (MIMO) systems. In [16], a neural-network-based adaptive backstepping controller is presented for a class of strict-feedback nonlinear state constrained systems subject to input delay. The control of nonaffine nonlinear systems with time-delays is in turn tackled with neural networks in a backstepping manner by Wang et al. [17]. By combining backstepping with a neural approximator, Si et al. [18] developed a robust control scheme to cope with nonlinear systems with stochastic disturbances and unknown hysteresis input. Li et al. [19], on the other hand, proposed an adaptive fuzzy controller based on the backstepping technique to deal with nonlinear strict-feedback systems subject to input delay and output constraint. By using small-gain approaches, Zhang et al. [20] and Su et al. [21] designed adaptive fuzzy backstepping schemes for uncertain nonlinear systems. Moreover, many other neural and fuzzy backstepping controllers have been recently employed in a variety of applications, ranging from robotic manipulators [22–24] to spacecraft [25–27].
However, although promising alternatives have already been proposed [28, 29], the explosion of complexity, which is an inherent issue in backstepping, still poses some difficulties for the implementation of control schemes based on this method.

Dynamic surface control (DSC) emerged as an appealing alternative to overcome the problems related to the explosion of terms. Nevertheless, DSC may lead to instabilities in the closed-loop system if the designed controller cannot properly cope with model uncertainty or external disturbances [30]. In order to surmount this handicap, intelligent approximators can be designed to compensate for modeling inaccuracies within DSC based schemes. In [31], dynamic surface control is combined with a radial basis function neural network and applied to a single-link manipulator with a flexible joint. Xu and Sun [32] used fuzzy logic to design an intelligent approximator within a dynamic surface based controller. Dynamic surface control has also been combined with other neural network based schemes [33–36] and with fuzzy inference systems [37–40].

Because of its close relationship to the computed torque technique, the robotics community has commonly adopted the feedback linearization method together with machine intelligence. By using intelligent algorithms, the major drawback of feedback linearization, namely, the requirement of a precise plant model, can be overcome. Modeling imprecisions and disturbances lead to a significant increase in tracking error when feedback linearization is applied without a compensation scheme [41]. Hence, neural networks [42–48], fuzzy logic [49–52], and evolutionary computation [53–55], for instance, have been used to surpass this limitation and to improve control performance.

The fusion of sliding mode control (SMC) and soft computing is considered a powerful approach to deal with highly uncertain nonlinear systems [56]. Sliding mode controllers can successfully handle bounded uncertainties [57], but the utilization of discontinuous relays may lead to undesirable chattering effects [41]. Even though a properly designed boundary layer has the capacity to completely eliminate chattering, the adoption of this strategy usually results in an inferior tracking performance [58]. In this case, computational intelligence might be used to overcome the shortcomings of smooth sliding controllers [59]. As demonstrated by Bessa and Barrêto [60], adaptive fuzzy inference systems, for example, can be properly embedded within the boundary layer to compensate for modeling imprecisions, for the purpose of enhancing the overall control efficiency. This procedure has already been successfully applied to the dynamic positioning of underwater vehicles [61, 62], vibration suppression in smart structures [63], tracking of unstable periodic orbits in a chaotic pendulum [64], and the control of electrohydraulic servosystems [65].

Considering that artificial neural networks can perform universal approximation [66], neural based schemes have also been merged with sliding mode controllers for the compensation of modeling inaccuracies. In [67], Azzaro and Veiga proposed a sliding mode controller for nonlinear systems with unknown dynamics. By means of the Levenberg-Marquardt algorithm, a neural network was trained to identify the plant dynamics. Fei and Lu [68] presented an adaptive sliding mode controller with a neural estimator and a fractional order sliding manifold. RBF neural networks and a fractional term in the sliding surface are used to improve the control performance. In [69], a double loop recurrent neural network is adopted to approximate unknown plant dynamics within a sliding mode controller. Zhang et al. [70] combined integral sliding modes with a critic neural network to propose an optimal guaranteed cost control scheme for constrained-input nonlinear systems with matched and unmatched disturbances. Yu and Kaynak addressed the integration of SMC with some other soft computing techniques in [56] and discussed further aspects and new trends about this approach in [59].

Although it might be the dominant paradigm in the aforementioned works, could the mere fusion of computational intelligence with modern control theory be enough to define the resulting control scheme as intelligent? Would an autonomous mechatronic system equipped with such a controller be considered an intelligent agent? The answers to these questions rely on our understanding of what intelligence means. Moreover, as we show in this paper, the sense of what defines intelligence can provide a clear path to delineate the scope of intelligent control.

In this work, supported by the most common definitions of natural intelligence, we propose a framework for the intelligent control of mechatronic agents. The intelligent controller should thereby be able to make reasonable predictions about the expected dynamic behavior of the system, adapt itself to changes in the plant and in the environment, learn by experience and by interacting with the environment, and be robust to modeling imprecisions and unexpected perturbations. In order to comply with these four requirements, an adaptive neural network is embedded within the boundary layer of a smooth sliding mode controller. Although several other approaches would be possible within the proposed framework, we think that the adopted strategy has some valuable features, which are discussed below. By means of a Lyapunov-like stability analysis, rigorous proofs for the boundedness and convergence properties of the closed-loop signals are provided. The proposed approach is applied to the intelligent depth control of a microdiving agent. Both numerical and experimental results obtained with the microdiving agent demonstrate the efficacy of the proposed approach and its suitability for both stabilization and trajectory tracking problems.

The following advantages of the proposed strategy can already be anticipated: (i) robustness to both modeling inaccuracies and external disturbances; (ii) the ability to incorporate prior knowledge of the plant and also to learn by interacting with the environment; (iii) the adoption of online learning, by minimizing a composite error signal, instead of supervised off-line training; (iv) it does not suffer from the curse of dimensionality, since the proposed neural network architecture requires only one neuron in the input layer, instead of all system states or state errors. The last two features not only simplify the design process and the resulting control law, but also minimize the required computational effort. As a matter of fact, the adoption of only one neuron in the input layer allows the computational complexity of the neural network to be exponentially reduced. Moreover, these attributes enable the proposed intelligent controller to be light enough to be employed in mechatronic systems with reduced computational power.

Moreover, it should be highlighted that the intelligent scheme discussed here introduces a significant improvement over our recently proposed scheme presented in [49]. By merging sliding mode control with artificial neural networks, the new intelligent depth controller provides robustness and enhances the learning capacity of the diving agent.

The rest of the paper is organized as follows. The usual concepts of intelligence and their respective adoption as the basis of a framework for intelligent control are addressed in Section 2. This discussion guides the design of a new intelligent controller for a microdiving agent in Section 3. The performance of the proposed intelligent scheme is evaluated in Section 4 by means of numerical and experimental results. Finally, the concluding remarks are presented in Section 5.

2. A Framework for Intelligent Control

The relevance of control theory to the investigation of complex biological phenomena, especially those related to the nervous system and to natural self-regulating processes, was highlighted seventy years ago in the seminal work by Norbert Wiener [71]. The cybernetic approach proposed by Wiener, which turned out to be very fruitful to both biological and engineering sciences, reinforced the benefits of interdisciplinary research. Motivated by this perspective, in this section, we outline the essential requirements for an intelligent controller based on concepts derived from areas such as neuroscience and psychology.

2.1. What Is Intelligence?

Before starting to discuss the most common definitions, it should be clear that here we are referring to intelligence in a strict sense. In fact, what we usually consider intelligent depends on our expectations: although ordinary people know how to play chess, a baby with the same skill would be considered a genius; a dog, even if it could only understand the rules of the game, would be nothing short of a miracle [72]. Hence, our focus lies on the most fundamental traits of intelligent behavior.

In order to delineate the strict meaning of the term, psychologists and neuroscientists have been trying to find the very essence of intelligence since the beginning of the twentieth century. According to Alfred Binet and Theodore Simon, the fathers of the IQ Test, intelligence represents the faculty of adapting oneself to new circumstances [73]. Thereafter, many researchers endorsed that adaptation is a core feature of intelligence [74]. Nevertheless, it should be noted that purely adaptive behavior does not seem to be enough to distinguish an intelligent agent. Otherwise, even bacteria, which have the ability to adapt themselves to the environment [75], could be characterized as intelligent organisms. An agent equipped solely with the adaptation attribute has to adapt itself to each change in its environment, even if this change represents the return to a previously experienced condition. We expect, however, that an intelligent agent should be able to recognize an already experienced state and thus act accordingly, without any readaptation. Such behavior clearly suggests the need for memory and the ability to learn.

According to Dawkins [76], the emergence of memory in living beings represented a great advance, inasmuch as their behavior could now be influenced not only by the immediate past, but also by events in the distant past.

The psychologist Walter Dearborn soon realized that the capacity to learn and to profit by experience represents a key aspect of intelligence [77]. Donald Hebb, whose work had an enormous influence on the field of artificial neural networks, also highlighted the importance of learning [78]. As a matter of fact, Hebbian theory is the basis of one of the oldest and most important learning rules in neural networks. Nowadays, there is no doubt that learning is essential for all intelligent beings and that it usually occurs throughout their entire lifetime. This process involves the continuous assimilation of new information, followed by the unceasing accommodation of the knowledge base as new information arrives.

More recently, neuroscientists began to draw attention to another essential trait of intelligence: the capacity to predict. Wolpert [79] argued that humans and animals, in order to properly activate their muscles and limbs, develop an internal model of their own motor systems and of their environment as well. This internal model is constantly updated by sensory feedback and is used by the central nervous system (CNS) to predict the consequences of its control actions [80]. Edwards et al. [81] proposed that the brain creates internal models of the environment to predict imminent sensory input. Llinás [82] emphasized that prediction is the prime objective of the brain and is imperative for intelligent motricity. In [83], Llinás and Roy suggested that the CNS has evolved for the purpose of predicting the outcomes of impending motion. Moreover, since prediction in this case should be unique, the authors also claim that this feature may have led to the emergence of self-awareness in complex living beings.

Another well-known property of biological systems is robustness [84]. Robust organisms are able to keep their functions even in the case of unexpected perturbations and under diverse environmental conditions. According to Kitano [85], since robustness favors evolvability, robust traits are usually selected by evolution. In addition, these naturally selected strategies can serve as inspiration for the design of artificial intelligent systems [86].

In highly evolved systems, such as human beings, intelligence might be also associated with many other higher predicates: emotion, creativity, consciousness, just to name a few. Nevertheless, robustness, prediction, adaptation, and learning surely represent the most basic characteristics of all intelligent organisms, even the simplest ones. Furthermore, from the control theory perspective, besides being hallmarks of intelligence, these four features are also very convenient, since they can provide a structured way for the development of intelligent controllers.

2.2. Intelligent Control

Now, based on our previous discussion and assuming that an intelligent controller should be able to emulate the most essential traits of intelligence, it must possess at least the following competencies.

Prediction. Considering that prediction represents the ultimate function of the nervous system, the intelligent controller must be able to incorporate the available knowledge about the plant, in order to anticipate the appropriate control action.

Adaptation. All intelligent organisms use adaptation to accommodate themselves to the needs for survival. Hence, the control scheme should also adapt itself to changes in the plant and in the environment.

Learning. The capacity to learn by experience and acquire knowledge by interacting with the environment is essential to an intelligent system. Within the proposed framework, online learning improves the ability to predict the dynamics of the plant during task execution.

Robustness. Since living beings are able to maintain their functions under diverse environmental circumstances, the intelligent controller must be robust against modeling inaccuracies and external disturbances, with the view of ensuring safe operating conditions.

The topology of an intelligent controller designed within the proposed framework is shown in Figure 1. In the adopted scheme, both prediction and learning blocks are responsible for incorporating knowledge to allow an appropriate control action. However, while the former takes a priori knowledge, available in the design phase, into account, the latter considers a posteriori knowledge, acquired by interacting with the environment. The adaptation block is in charge of adjusting the learning mechanism. Finally, by means of the signals computed using a priori and a posteriori knowledge, the robustness block defines the control action that should be sent to the plant.

Figure 1: Proposed topology for the intelligent controller.

It is important to note that the four attributes considered in the proposed framework can also be used to distinguish the wide range of control schemes. Model-based controllers, for example, are able to consider only a priori knowledge. Purely adaptive schemes cannot learn and always have to adapt themselves, even to previously experienced situations. In fact, it is easy to infer that neither adaptive nor model-based controllers could be considered intelligent. Nevertheless, there are several situations where the distinction would not be so straightforward.

Some control approaches that make use of soft computing techniques might eventually be considered intelligent in a broad context. Artificial neural networks employing offline supervised training or fuzzy methods based on heuristic schemes have been widely applied for control purposes and certainly have their application scopes as well. However, a mechatronic agent equipped with such a controller could not adapt itself in the face of new experiences, nor would it perform online learning by interacting with the environment. Therefore, in a strict sense, that agent would not be able to exhibit intelligent behavior.

Lastly, we would like to emphasize that the capacity to emulate intelligence does not necessarily imply better control performance. There are situations, as in the case of industrial robots operating in a well structured environment and performing repetitive tasks, in which a model-based controller would be the most appropriate choice as long as contact dynamics are not predominant. Moreover, robust or adaptive methods can also effectively deal with plants subject to structured uncertainties, i.e., parameter variations. However, when a high level of uncertainties comes on the scene and a mechatronic agent must operate in an environment of imperfect information, intelligent controllers represent the most appropriate strategy. In order to demonstrate this statement, an intelligent controller is designed within the proposed framework in the next section. By means of the numerical and experimental results, we show in Section 4 that the designed intelligent scheme can handle large uncertainties and noisy input signals.

3. Depth Control of a Micro Diving Agent

Compact underwater agents play an important role in oceanographic research and industrial monitoring. They have been deployed in the investigation of ocean phenomena [87] and in the development of underwater sensor networks [88]. However, the design of accurate controllers for this kind of mechatronic system can become very challenging, due to their intrinsic nonlinear behavior and highly uncertain dynamics [49]. Furthermore, they must have a compact size to be hydrodynamically transparent [87], which clearly restricts the dimensions of the embedded electronics and the associated computational power. Therefore, the control law should be able to deal with a challenging control task, even in the face of restricted computational resources.

At this point, the framework discussed in the previous section is employed in the design of an intelligent depth controller for an autonomous diving agent. Autonomous diving agents are very representative of modern intelligent mechatronic systems, since they must learn in real time in order to regulate their depth despite all the uncertainties and perturbations related to the underwater environment. This framework has also been successfully applied to the accurate position tracking of electrohydraulic servomechanisms [89].

The adopted microrobot is a revised and improved version of the agent introduced in [49] and was developed at the Institute of Mechanics and Ocean Engineering at Hamburg University of Technology [90]. The microdiving agent and its main components are presented in Figure 2. It is equipped with a Teensy 3.2 microcontroller board, an MS5803-01BA variometer to estimate the depth, two M24M02-DR EEPROM chips, a 3.7 V lithium polymer round cell battery with 1200 mAh, and an ESP8266 WiFi module for out-of-water communication purposes.

Figure 2: Microdiving agent and its main components.

The diving agent is able to adjust its depth by regulating the displaced volume, which can be obtained by means of the actuation mechanism depicted in Figure 2(c). The actuation mechanism is composed of a rack/pinion mechanism, a Faulhaber AM1020-V-3-16 stepper motor with a planetary gear (256:1), and a roller diaphragm. By driving the pinion gear with the stepper motor, the rack displaces the roller diaphragm and changes the volume of the chamber and hence its net buoyancy force $f_b$. Considering that, due to constructive aspects, the actuation angle $\theta$ of the stepper motor is restricted to $0 \le \theta \le \theta_{\max}$, the net buoyancy force related to $\theta$ can be estimated according to

$f_b = \rho g V = \rho g A n \theta$  (1)

with $\rho$ the water density, $g$ the gravitational acceleration, $V$ the displaced volume, $A$ the area of the roller diaphragm, $n\theta_{\max}$ the maximum stroke of the rack, $n$ the rack/pinion transmission ratio, and $\theta_{\max}$ the maximum angular displacement of the stepper motor. For more details concerning the diving agent see [91].
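To make the volume-to-buoyancy relation concrete, the sketch below evaluates a net buoyancy force of the assumed lumped form f_b = rho·g·A·n·theta, with the actuation angle clamped to its admissible range. The function name and all parameter values are illustrative placeholders, not the actual data of the agent:

```python
RHO = 1000.0  # water density (kg/m^3), illustrative value
G = 9.81      # gravitational acceleration (m/s^2)

def net_buoyancy(theta, area, ratio, theta_max):
    """Net buoyancy force f_b = rho * g * A * n * theta for a clamped angle.

    theta: stepper angle (rad); area: roller diaphragm area (m^2);
    ratio: rack/pinion transmission ratio (m/rad). Illustrative only.
    """
    theta = min(max(theta, 0.0), theta_max)  # actuation angle is restricted
    return RHO * G * area * ratio * theta
```

Commanding any angle beyond the mechanical limit simply saturates the available buoyancy authority, mirroring the constructive restriction mentioned above.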

3.1. System Dynamics

The dynamic behavior of an underwater agent must consider both rigid body effects and its interaction with the water, as well as the actuator dynamics. The Navier-Stokes equations might be adopted to represent the fluid-rigid body interaction, but this would be infeasible for real-time control. In order to overcome this problem, we choose to adopt the standard lumped parameters approach [92]. Thus, the vertical motion of the diving agent can be described as follows:

$m\ddot{z} + c\,|\dot{z}|\,\dot{z} = f_b$  (2)

where $m$ stands for the mass of the agent plus the hydrodynamic added mass, $z$ represents the depth position, with the $z$-axis pointing downwards, and $c$ is the coefficient of the hydrodynamic quadratic damping.

Considering that the actuator dynamics should not be neglected [49], a first-order low-pass filter is used to describe its behavior:

$\tau\dot{f}_b + f_b = u$  (3)

where $u$ is the control signal and $\tau$ is a strictly positive parameter concerning the filter time constant.

Equations (2) and (3) allow the intelligent controller to be numerically evaluated, prior to its implementation in the real diving agent.
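For such a numerical evaluation, the two equations can be integrated with a simple explicit Euler scheme. The sketch below assumes the quadratic-damping model m·zddot + c·|zdot|·zdot = f_b together with the first-order actuator filter tau·f_b_dot + f_b = u; the parameter values m, c, and tau are purely illustrative:

```python
def simulate_step(z, zdot, f_b, u, dt, m=1.0, c=0.5, tau=0.1):
    """One explicit-Euler step of the vertical dynamics and the first-order
    actuator filter; m, c, and tau are illustrative placeholder values."""
    zddot = (f_b - c * abs(zdot) * zdot) / m  # m*zddot + c*|zdot|*zdot = f_b
    f_b_dot = (u - f_b) / tau                 # tau*f_b_dot + f_b = u
    return z + zdot * dt, zdot + zddot * dt, f_b + f_b_dot * dt
```

Iterating this step with a candidate control signal u allows the closed-loop behavior to be examined before any experiment with the real agent.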

3.2. Intelligent Controller

With a view to designing the intelligent control scheme, only the dynamical effects that can be rigorously computed in a straightforward manner are taken into consideration. All terms in (2) whose coefficients have to be experimentally estimated and are subject to variations in time are considered as model uncertainties and neglected in the design of the control law. Thus, for control purposes, (3) is combined with the time derivative of (2), leading to a third-order differential equation:

$m\tau\dddot{z} = u + d$  (4)

where $d$ stands for the neglected hydrodynamic effects and external disturbances that may occur.

The distinction between the scopes of the two mathematical models should be stressed: while (2) and (3) are adopted to simulate the dynamics of the agent in numerical studies, (4) is considered in the design of the controller that is used in both numerical and experimental evaluations.

The control task is to ensure that the state vector $\mathbf{x} = [z \;\; \dot{z} \;\; \ddot{z}]^{\top}$ will follow a desired reference $\mathbf{x}_d = [z_d \;\; \dot{z}_d \;\; \ddot{z}_d]^{\top}$, even in the presence of modeling inaccuracies. At this point, the sliding mode method is invoked to comply with the required robustness and to accommodate the prior knowledge about the plant. Hence, a sliding manifold is established in the state space:

$s = \ddot{\tilde{z}} + 2\lambda\dot{\tilde{z}} + \lambda^{2}\tilde{z}$  (5)

where $\tilde{\mathbf{x}} = \mathbf{x} - \mathbf{x}_d = [\tilde{z} \;\; \dot{\tilde{z}} \;\; \ddot{\tilde{z}}]^{\top}$ is the error vector and $\lambda$ is a strictly positive constant.

In order to avoid the undesirable chattering effects, the standard relay term, $\operatorname{sgn}(s)$, is smoothed out by means of a saturation function:

$\operatorname{sat}(s/\phi) = \operatorname{sgn}(s)$ if $|s/\phi| \ge 1$, and $\operatorname{sat}(s/\phi) = s/\phi$ otherwise  (6)

where $\phi$ is a positive parameter that represents the width of the resulting boundary layer that neighbors the sliding manifold. In addition, the control law is designed to ensure the attractiveness of the boundary layer:

$u = m\tau\,(\dddot{z}_d - 2\lambda\ddot{\tilde{z}} - \lambda^{2}\dot{\tilde{z}}) - \hat{d} - K\operatorname{sat}(s/\phi)$  (7)

with $K$ representing the control gain and $\hat{d}$ standing for an estimate of $d$.
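A minimal sketch of such a smooth sliding mode law, assuming a boundary-layer control of the form u = m·tau·(zddd_d − 2·lam·eddot − lam²·edot) − d_hat − K·sat(s/phi), with e = z − z_d; all gains and parameter values below are illustrative placeholders:

```python
def sat(x):
    """Saturation: sgn(x) outside the unit interval, linear inside."""
    return max(-1.0, min(1.0, x))

def control_law(e, edot, eddot, zddd_d, w_hat, nu, m=1.0, tau=0.1,
                lam=2.0, K=5.0, phi=0.05):
    """Smooth sliding mode law with neural compensation; e = z - z_d and its
    time derivatives. Gains and parameters are illustrative only."""
    s = eddot + 2.0 * lam * edot + lam ** 2 * e        # sliding variable
    d_hat = sum(wi * ni for wi, ni in zip(w_hat, nu))  # neural estimate
    u = m * tau * (zddd_d - 2.0 * lam * eddot - lam ** 2 * edot) \
        - d_hat - K * sat(s / phi)
    return u, s
```

With zero tracking error and zero network weights, the law reduces to pure feedforward on the desired jerk, which is the expected behavior of a boundary-layer controller at rest on the manifold.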

Considering that artificial neural networks can be used as universal approximators [66], a single hidden layer network is adopted to estimate the neglected dynamical effects. Furthermore, artificial neural networks have the capacity to fulfill the required learning features.

In order to avoid the issues related to the curse of dimensionality [93], this work considers only the switching variable $s$ in the input layer:

$\hat{d} = \hat{\mathbf{w}}^{\top}\boldsymbol{\nu}(s)$  (8)

where $\hat{\mathbf{w}} = [\hat{w}_1 \;\; \hat{w}_2 \;\; \cdots \;\; \hat{w}_N]^{\top}$ is the weight vector, $\boldsymbol{\nu}(s) = [\nu_1(s) \;\; \nu_2(s) \;\; \cdots \;\; \nu_N(s)]^{\top}$ represents the vector with the activation functions $\nu_i$, $i = 1, \dots, N$, and $N$ is the total number of neurons in the hidden layer. The adopted layout for the single hidden layer network is depicted in Figure 3.

Figure 3: Single hidden layer network.

It should be noted that if the three state errors ($\tilde{z}$, $\dot{\tilde{z}}$, $\ddot{\tilde{z}}$) had been adopted as input, instead of the single switching variable $s$, the computational complexity of the neural network would grow exponentially with the input dimension, from $O(N)$ to $O(N^{3})$.
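The single-input hidden layer can be sketched as below, using Gaussian (i.e., locally supported) activation functions as an illustrative choice; the centers and width are hypothetical design parameters:

```python
import math

def gaussian_activations(s, centers, width=0.1):
    """Locally supported (Gaussian) activation vector nu(s) over the single
    input s; centers and width are hypothetical design choices."""
    return [math.exp(-((s - c) / width) ** 2) for c in centers]

def nn_output(s, w_hat, centers, width=0.1):
    """Single-hidden-layer estimate d_hat = w_hat^T nu(s)."""
    nu = gaussian_activations(s, centers, width)
    return sum(wi * ni for wi, ni in zip(w_hat, nu))
```

With the single input $s$, a hidden layer of N neurons costs O(N) per evaluation; covering three inputs with the same per-dimension resolution on a grid would require O(N³) neurons instead.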

Now, the boundedness and the convergence properties of the closed-loop signals are investigated by means of a Lyapunov-like stability analysis. Thus, let a positive-definite function be defined as

$V = \frac{1}{2}s_{\phi}^{2} + \frac{1}{2}\eta^{-1}\boldsymbol{\delta}^{\top}\boldsymbol{\delta}$  (9)

where $\eta$ is a strictly positive constant, $s_{\phi} = s - \phi\operatorname{sat}(s/\phi)$ is a measure of the distance of $s$ to the boundary layer, and $\boldsymbol{\delta} = \hat{\mathbf{w}} - \mathbf{w}^{*}$, with $\mathbf{w}^{*}$ being the optimal weight vector that minimizes the approximation error $\varepsilon = d - \mathbf{w}^{*\top}\boldsymbol{\nu}(s)$.

Considering that $\dot{s}_{\phi} = \dot{s}$ outside the boundary layer and $s_{\phi} = 0$ inside it, the time derivative of $V$ is

$\dot{V} = s_{\phi}\dot{s} + \eta^{-1}\boldsymbol{\delta}^{\top}\dot{\hat{\mathbf{w}}}$  (10)

By applying the control law (7) to (10), $\dot{V}$ becomes

$\dot{V} = (m\tau)^{-1}s_{\phi}\,(\varepsilon - \boldsymbol{\delta}^{\top}\boldsymbol{\nu} - K\operatorname{sat}(s/\phi)) + \eta^{-1}\boldsymbol{\delta}^{\top}\dot{\hat{\mathbf{w}}}$  (11)

Since both $\eta$ and $m\tau$ are strictly positive parameters, they can be combined into a single adaptation rate $\bar{\eta} = \eta/(m\tau)$.

Hence, by updating $\hat{\mathbf{w}}$ according to

$\dot{\hat{\mathbf{w}}} = \bar{\eta}\,s_{\phi}\,\boldsymbol{\nu}(s)$  (12)

the time derivative of $V$ becomes

$\dot{V} = (m\tau)^{-1}s_{\phi}\,(\varepsilon - K\operatorname{sat}(s/\phi))$  (13)

If the control gain is defined according to $K \ge \bar{\varepsilon} + m\tau\xi$, with $\bar{\varepsilon} \ge |\varepsilon|$ being an upper bound of the approximation error and $\xi$ being a strictly positive constant, it follows from (6) that, outside the boundary layer, i.e., when $|s| \ge \phi$, $\dot{V}$ becomes

$\dot{V} \le -\xi|s_{\phi}|$  (14)

Integrating both sides of (14) shows that V remains bounded and that |σ| is integrable over time.

Since the absolute value function is uniformly continuous, it follows from Barbalat’s lemma [41] that the proposed intelligent controller ensures the convergence of the tracking error to the boundary layer.

Moreover, considering that σ may be rewritten as a linear combination of the tracking error and its time derivatives, it can be verified that, inside the boundary layer, the tracking error satisfies the differential inequality (16).

Multiplying (16) by the integrating factor e^{λt}, one gets (17).

Integrating (17) between 0 and t gives (18), which can be conveniently rewritten as (19), with a constant value determined by the initial conditions.

Integrating (19) between 0 and t yields (20), with another constant value determined by the initial conditions.

Dividing (20) by e^{λt}, it can be verified, as t → ∞, that the bound (21) holds.

By imposing (21) on (19) and dividing it by e^{λt}, it follows, as t → ∞, that (22) holds.

Furthermore, applying (21) and (22) to (16) gives the corresponding bounds on the error derivatives.

Therefore, the intelligent controller defined by (7), (8), and (12) ensures the exponential convergence of the tracking error to a closed region whose size is proportional to the boundary layer width φ.
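For a third-order sliding variable σ = (d/dt + λ)²x̃, which is consistent with the three state errors mentioned earlier, the classical boundary-layer result of Slotine and Li [41] gives ultimate bounds of the following form. This is a sketch of the standard result; the paper's constants may differ:

```latex
|\sigma| \le \phi
\;\Longrightarrow\;
|\tilde{x}| \le \frac{\phi}{\lambda^{2}}, \qquad
|\dot{\tilde{x}}| \le \frac{2\phi}{\lambda}, \qquad
|\ddot{\tilde{x}}| \le 4\phi .
```

In other words, once the trajectories reach the boundary layer, the tracking error and its derivatives are confined to a region that shrinks as φ decreases or λ increases.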

It is important to note that the adopted approach is in total agreement with the framework proposed for intelligent control. The control law (7) not only yields the required robustness, but also accommodates both prior and acquired information. A priori knowledge is provided by (4), which represents our knowledge on the plant during the design phase. Equation (8), on the other hand, gives a posteriori knowledge, which stands for everything that is learned online by means of the interaction with the environment. Finally, (12) enables the controller to adapt itself online by adjusting the weights of the neural network. The online nature of the proposed adaptation law motivates the adoption of locally supported activation functions, in order to ensure the retention of learned experiences [94]. The block diagram of the proposed intelligent controller is presented in Figure 4.

Figure 4: Block diagram of the intelligent controller.

4. Numerical and Experimental Results

The efficacy of the proposed control scheme is now evaluated by means of both numerical and experimental investigations.

4.1. Numerical Studies

In order to simulate the dynamic behavior of the diving agent, the fourth-order Runge-Kutta method is employed for time integration. The sampling rates are 1 kHz for the control system and 10 kHz for the dynamical model. The model parameters (the masses in kg, the quadratic damping coefficient in N s²/m², and the actuation frequency in Hz) are chosen to match the physical agent, and the control parameters λ, φ, and η are held fixed. It should be noted that, according to (7), neither the added hydrodynamic mass nor the quadratic damping is taken into account in the control law. In this case, both hydrodynamic effects are treated as modeling inaccuracies.
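The multi-rate setup just described can be sketched as follows; the function names and the scalar test dynamics are illustrative, not the agent's actual model. The plant is integrated at 10 kHz while the control input is refreshed only every tenth step (1 kHz) and held constant in between:

```python
import math

def rk4_step(f, t, x, dt):
    """One fourth-order Runge-Kutta step for a scalar ODE x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(f, x0, t_end, dt_plant=1e-4, control_every=10, controller=None):
    """Multi-rate loop: plant integrated at 1/dt_plant Hz, control input u
    updated every `control_every` steps and zero-order held in between."""
    t, x, u = 0.0, x0, 0.0
    step = 0
    while t < t_end - 1e-12:
        if controller is not None and step % control_every == 0:
            u = controller(t, x)
        x = rk4_step(lambda tt, xx: f(tt, xx, u), t, x, dt_plant)
        t += dt_plant
        step += 1
    return x
```

For instance, uncontrolled exponential decay, `simulate(lambda t, x, u: -x, 1.0, 1.0)`, reproduces e⁻¹ to well within RK4 accuracy at this step size.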

Regarding the adopted neural network, the weight vector is initialized as a null vector and updated at each iteration step according to (12). The adaptation rate η is initially set to a fixed value. As depicted in Figure 5, two different types of locally supported activation functions are considered: Gaussian and Triangular. In both cases, seven neurons are adopted in the hidden layer, and the center of each neuron is defined in accordance with the width of the boundary layer.

Figure 5: Adopted activation functions.

In order to demonstrate the robustness of the proposed intelligent scheme against modeling imprecisions, as well as its near indifference to the chosen neuron type, the controller is implemented with both Gaussian and Triangular activation functions. The diving agent is released on the surface and should dive to the prescribed depth. The obtained results are presented in Figure 6. To facilitate the comparison with the experimental results, the control action is presented in terms of the actuation angle, which is computed according to (1).

Figure 6: Depth regulation with both Gaussian and Triangular activation functions, .

As observed in Figure 6, the performance associated with the two different types of neurons is quite similar. The Gaussian type allows the agent to dive a little faster (see Figure 6(a)), but at the cost of a slightly larger control action at the very beginning (see Figure 6(b)). In both cases, the controller is able to drive the agent to the desired depth, regardless of the presence of structured and unstructured uncertainties. It is important to note that, since the weight vector is initially set to zero, the neural network has no prior knowledge about the plant and must learn online, from scratch, how to compensate for modeling inaccuracies.

The relevance of the ability to learn can be highlighted by confronting the proposed intelligent approach with an adaptive sliding mode controller. For comparison purposes, the intelligent scheme can be easily converted to a purely adaptive law by adopting a single neuron in the hidden layer with a constant unit activation function. Thus, by evoking the Euler method, the purely adaptive scheme becomes d̂_{k+1} = d̂_k + η σ_k T, with T being the sampling period of the controller, d̂_k standing for the estimate at the k-th iteration step, and d̂_0 = 0. Figures 7 and 8 present the results obtained with two different adaptation rates for both intelligent and adaptive controllers. Gaussian activation functions are adopted within the intelligent approach.
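The Euler-discretized adaptation step described above can be sketched as follows; the names are illustrative, with `d_hat` playing the role of the single-neuron estimate:

```python
def adaptive_update(d_hat, sigma, eta, T):
    """One Euler step of the purely adaptive law d_hat' = eta * sigma:
    the scalar estimate is driven directly by the switching variable."""
    return d_hat + eta * sigma * T
```

Starting from d̂ = 0, the estimate ramps toward whatever value drives σ to zero, but it must re-adapt from scratch whenever conditions change, which is precisely the limitation discussed next.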

Figure 7: Depth regulation with both adaptive and intelligent schemes, .
Figure 8: Depth regulation with both adaptive and intelligent schemes, .

By inspecting Figures 7 and 8, it can be verified that a merely adaptive scheme is not the most suitable choice to cope with unstructured uncertainties. Regardless of the adopted adaptation rate, the intelligent controller is able to stabilize the agent at the desired depth (Figures 7(a) and 8(a)) with smooth control actions (Figures 7(b) and 8(b)). The adaptive sliding mode controller, however, presents an impaired performance at the lower adaptation rate (Figure 7(a)), even considering the successive attempts of the stepper motor to regulate the displaced volume (Figure 7(b)). Moreover, it should be made clear that the inability of the adaptive control law is not a mere matter of parameter settings. Figure 8 shows that increasing the adaptation rate can improve the capacity of the purely adaptive scheme to stabilize the agent, but at the cost of a prohibitive control action. The high control activity related to the adaptive scheme (see Figure 8(b)) would lead to excessive power consumption by the stepper motor, which would compromise the agent’s battery resources. In addition, repeatedly applied loads may induce mechanical failures due to fatigue.

The power consumption related to each control scheme can be estimated by means of the angular velocity of the motor. Considering that the rotational energy at the motor shaft is proportional to its squared angular velocity, the total energy spent can be estimated as being proportional to the time integral of the squared angular velocity.
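A discrete version of this estimate can be sketched as follows; the proportionality constant `c` and the sample list are assumptions for illustration:

```python
def energy_estimate(omega_samples, dt, c=1.0):
    """Discrete estimate of E ~ c * integral(omega(t)^2 dt):
    the rotational-energy proxy used to compare the two controllers."""
    return c * sum(w * w for w in omega_samples) * dt
```

A constant angular velocity of 2 rad/s over 1 s sampled at 1 kHz, for example, yields an estimate of 4 (in the units fixed by c).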

Table 1 presents the energy expenditure for the adaptive and intelligent controllers, associated with the adaptation rates evaluated in Figures 7 and 8.

Table 1: Energy expenditure for both adaptive and intelligent controllers.

According to Table 1, the intelligent approach is much more efficient than the merely adaptive scheme. When compared with the intelligent scheme, the energy spent by the adaptive controller is almost sixty times greater at the lower adaptation rate and more than a hundred and sixty times greater at the higher one. This is because the adaptive control law lacks the ability to learn from past experiences and needs to constantly adjust itself. The intelligent scheme, on the other hand, has the capacity to accommodate knowledge by approximating the modeling inaccuracies as a function of σ. On this basis, the proposed intelligent controller learns, over the iterations, how to properly compensate for uncertainties as a function of the distance between the state errors and the sliding manifold. The convergence of the neural network with respect to the iteration step is shown in Figure 9.

Figure 9: Convergence of the neural network with respect to the number of iterations, .

Considering that the controller has a sampling rate of 1 kHz, it can be verified in Figure 9 that, after 5 seconds, i.e., 5000 iterations, the output of the neural network converges to a well-defined function of the sliding variable σ.

4.2. Experimental Validation

Now, the intelligent controller defined by (7), (8), and (12) is experimentally evaluated. The control scheme was implemented in the Teensy 3.2 microcontroller board, following a procedural programming paradigm in C++. The main loop runs at 40 Hz and consists of pressure and temperature sensor queries for determining water depth, which in turn is used to provide feedback for control command computations and stepper motor actuation. Data is stored in EEPROM and gathered after each experiment using the WiFi module. All trials were carried out in the wave tank of the Institute of Mechanics and Ocean Engineering, Hamburg University of Technology (Figure 10).

Figure 10: Wave tank at the Institute of Mechanics and Ocean Engineering, Hamburg University of Technology.

Since the proposed scheme requires knowledge not only of the depth but also of the depth’s first and second time derivatives, state observers have to be implemented. Considering their appealing finite-time convergence property [57], sliding mode observers have been adopted to estimate both diving velocity and diving acceleration. Hence, following Shtessel et al. [57], the estimated state can be obtained from a known input signal according to (25) and (26), where the observer gains are strictly positive parameters.

Here, a cascade implementation of (25) and (26) is used to estimate the diving velocity and acceleration by means of the measured depth. The choice of the observer parameters is straightforward and follows Gerrit [90], who evaluates the influence of the parameters on both estimation performance and convergence time of the observer, confirming its robustness to parameter variations and its suitability to the microdiving agent. All control parameters are kept as before. Figure 11 shows the experimental evaluation of the proposed intelligent controller for depth regulation at the prescribed depth.
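A minimal sketch of such an observer, in the spirit of the super-twisting differentiators discussed in [57]: `x_hat` tracks the measurement while the integrated term `z` converges to its time derivative. The gains `lam` and `alpha`, the Euler discretization, and the ramp test signal are all illustrative assumptions, not the values used on the agent:

```python
import math

def smo_step(x_meas, x_hat, z, lam, alpha, dt):
    """One Euler step of a super-twisting-style sliding mode differentiator."""
    e = x_hat - x_meas
    s = (e > 0) - (e < 0)                     # sgn(e)
    v = -lam * math.sqrt(abs(e)) * s + z      # continuous correction term
    x_hat += v * dt
    z += -alpha * s * dt                      # integrated discontinuous term
    return x_hat, z

# Differentiating the ramp x(t) = t: z settles in a small neighborhood
# of the true derivative, with chatter of order alpha * dt.
x_hat, z, dt = 0.0, 0.0, 1e-3
z_hist = []
for k in range(5000):
    x_hat, z = smo_step(k * dt, x_hat, z, lam=4.0, alpha=20.0, dt=dt)
    z_hist.append(z)
z_avg = sum(z_hist[-1000:]) / 1000.0          # chatter-averaged derivative estimate
```

Cascading two such stages, as the paper does, differentiates twice; the second stage inherits the first stage's chatter, which is why the acceleration estimate in Figure 12(d) is the noisier of the two.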

Figure 11: Experimental depth regulation with the intelligent controller, .

As observed in Figure 11, the intelligent controller is able to stabilize the diving agent at the desired depth with a smooth control action that compromises neither the integrity of its mechanical components nor the battery capacity.

At this point, in order to demonstrate the ability of the proposed intelligent scheme to allow not only depth regulation but also trajectory tracking, the diving agent is set to follow a sinusoidal depth profile. This is a very useful feature for a sensor platform that is intended to monitor environmental values in a liquid column. Figure 12 presents the experimentally obtained results.

Figure 12: Experimental depth tracking with the intelligent controller, .

By appraising Figure 12, it can be ascertained that the intelligent controller allows the diving agent to track the desired trajectory, even considering that the initial states and the initial desired states do not match. Moreover, as observed in Figure 12(b), the control action remains smooth throughout the tracking. Regarding the adopted sliding mode observer, it can be verified that both velocities and accelerations (Figures 12(c) and 12(d), respectively) are estimated with good accuracy. Due to the double differentiation inherent to the cascade implementation, the estimated acceleration signal becomes slightly noisy (Figure 12(d)). Nevertheless, even in the presence of noisy signals, the proposed controller is able to track the reference (Figure 12(a)) without harming the smoothness of the control signal (Figure 12(b)).

It should be emphasized that the designed intelligent depth controller represents an important improvement over our recently proposed scheme [49]. By combining sliding mode control with artificial neural networks, the new intelligent depth controller’s contribution is twofold: it provides robustness and it enhances the learning capacity of the diving agent.

Finally, a self-organizing mechanism might also be implemented to automatically set the architecture of the neural network. However, despite an eventual improvement in tracking performance, this would certainly increase the computational requirements of the embedded electronics. Thus, in order to avoid compromising the implementation in embedded systems, the trade-off between computational costs and performance improvement must be taken into account during the design phase of the mechatronic system.

5. Concluding Remarks

In this work, a framework for the intelligent control of mechatronic systems is presented. We have shown that the intelligent controller derived from this framework is able to emulate the essential traits of natural intelligence, namely, robustness, prediction, adaptation, and learning. In order to comply with these required attributes, we propose the design of an intelligent approach by embedding a neural network within the boundary layer of a smooth sliding mode controller. Thereby, the sliding mode method provides a robust structure to accommodate both a priori and a posteriori knowledge. The introduction of a neural network with an online adaptation scheme allows the controller to learn by interacting with the environment, without the need for offline training. It should be emphasized that the adoption of only one input not only simplifies the resulting control law but also avoids the issues related to the curse of dimensionality. By keeping the complexity low, the control scheme can be deployed on the limited embedded hardware of most common mechatronic systems. All these features justified the adoption of the proposed approach in the design of a new depth controller for a diving agent. By means of numerical and experimental results, it has been demonstrated that the designed control law can cope with a plant subject to large unstructured uncertainties, incomplete information, and noisy input signals. In view of the robustness, prediction, adaptation, and learning capacities granted by the combination of artificial neural networks with sliding mode control, the resulting intelligent scheme provides greatly improved tracking performance over a purely adaptive one and reduces power consumption by about 98%.

Data Availability

The experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Alexander von Humboldt Foundation [3.2-BRA/1159879 STPCAPES]; the Brazilian Coordination for the Improvement of Higher Education Personnel [BEX 8136/14-9]; the Brazilian National Council for Scientific and Technological Development [308429/2017-6]; and the German Research Foundation [Kr752/33-1, Kr752/36-1].

References

  1. M. Akbarzadeh-T, K. Kumbla, E. Tunstel, and M. Jamshidi, “Soft computing for autonomous robotic systems,” Computers & Electrical Engineering, vol. 26, no. 1, pp. 5–32, 2000.
  2. P. J. Antsaklis, K. M. Passino, and S. J. Wang, “Towards intelligent autonomous control systems: Architecture and fundamental issues,” Journal of Intelligent & Robotic Systems, vol. 1, no. 4, pp. 315–342, 1989.
  3. M. Jamshidi, “Tools for intelligent control: fuzzy controllers, neural networks and genetic algorithms,” Philosophical Transactions of the Royal Society A: Mathematical, Physical & Engineering Sciences, vol. 361, no. 1809, pp. 1781–1808, 2003.
  4. M. Boukens and A. Boukabou, “Design of an intelligent optimal neural network-based tracking controller for nonholonomic mobile robot systems,” Neurocomputing, vol. 226, pp. 46–57, 2017.
  5. J. M. Larrazabal and M. S. Peñas, “Intelligent rudder control of an unmanned surface vessel,” Expert Systems with Applications, vol. 55, pp. 106–117, 2016.
  6. X. Lu, Y. Zhao, and M. Liu, “Self-learning interval type-2 fuzzy neural network controllers for trajectory control of a Delta parallel robot,” Neurocomputing, vol. 283, pp. 107–119, 2018.
  7. A. Mai and S. Commuri, “Intelligent control of a prosthetic ankle joint using gait recognition,” Control Engineering Practice, vol. 49, pp. 1–13, 2016.
  8. L. Hu, R. Li, T. Xue, and Y. Liu, “Neuro-adaptive tracking control of a hypersonic flight vehicle with uncertainties using reinforcement synthesis,” Neurocomputing, vol. 285, pp. 141–153, 2018.
  9. S. Li, L. Ding, H. Gao, C. Chen, Z. Liu, and Z. Deng, “Adaptive neural network tracking control-based reinforcement learning for wheeled mobile robots with skidding and slipping,” Neurocomputing, vol. 283, pp. 20–30, 2018.
  10. D. Wang, H. He, and D. Liu, “Intelligent Optimal Control With Critic Learning for a Nonlinear Overhead Crane System,” IEEE Transactions on Industrial Informatics, vol. 14, no. 7, pp. 2932–2940, 2018.
  11. C. Lucas, D. Shahmirzadi, and N. Sheikholeslami, “Introducing BELBIC: brain emotional learning based intelligent controller,” Intelligent Automation and Soft Computing, vol. 10, no. 1, pp. 11–22, 2004.
  12. E. Lotfi and A. A. Rezaee, “Generalized BELBIC,” Neural Computing and Applications.
  13. X. Wang, X. Yin, and F. Shen, “Robust adaptive neural tracking control for a class of nonlinear systems with unmodeled dynamics using disturbance observer,” Neurocomputing, vol. 292, pp. 49–62, 2018.
  14. X. Lin, H. Dong, and X. Yao, “Tuning function-based adaptive backstepping fault-tolerant control for nonlinear systems with actuator faults and multiple disturbances,” Nonlinear Dynamics, vol. 91, no. 4, pp. 2227–2239, 2018.
  15. H. Gao, Y. Song, and C. Wen, “Backstepping design of adaptive neural fault-tolerant control for MIMO nonlinear systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 11, pp. 2605–2613, 2017.
  16. D. Li, Y. Liu, S. Tong, C. L. Chen, and D. Li, “Neural Networks-Based Adaptive Control for Nonlinear State Constrained Systems With Input Delay,” IEEE Transactions on Cybernetics, pp. 1–10.
  17. H. Wang, W. Sun, and P. X. Liu, “Adaptive Intelligent Control of Nonaffine Nonlinear Time-Delay Systems With Dynamic Uncertainties,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 7, pp. 1474–1485, 2017.
  18. W. Si, X. Dong, and F. Yang, “Adaptive neural prescribed performance control for a class of strict-feedback stochastic nonlinear systems with hysteresis input,” Neurocomputing, vol. 251, pp. 35–44, 2017.
  19. H. Li, L. Wang, and H. Du, “Adaptive fuzzy backstepping tracking control for strict-feedback systems with input delay,” IEEE Transactions on Fuzzy Systems, 2016.
  20. X. Zhang, X. Liu, and Y. Li, “Adaptive fuzzy tracking control for nonlinear strict-feedback systems with unmodeled dynamics via backstepping technique,” Neurocomputing, vol. 235, pp. 182–191, 2017.
  21. H. Su, T. Zhang, and W. Zhang, “Fuzzy adaptive control for SISO nonlinear uncertain systems based on backstepping and small-gain approach,” Neurocomputing, vol. 238, pp. 212–226, 2017.
  22. L. A. Vazquez, F. Jurado, C. E. Castaneda, and V. Santibanez, “Real-Time Decentralized Neural Control via Backstepping for a Robotic Arm Powered by Industrial Servomotors,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 2, pp. 419–426, 2018.
  23. W. Chang, S. Tong, and Y. Li, “Adaptive fuzzy backstepping output constraint control of flexible manipulator with actuator saturation,” Neural Computing and Applications, vol. 28, no. S1, pp. 1165–1175, 2017.
  24. B. Baigzadehnoe, Z. Rahmani, A. Khosravi, and B. Rezaie, “On position/force tracking control problem of cooperative robot manipulators using adaptive fuzzy backstepping approach,” ISA Transactions, vol. 70, pp. 432–446, 2017.
  25. J. Niu, F. Chen, and G. Tao, “Nonlinear fuzzy fault-tolerant control of hypersonic flight vehicle with parametric uncertainty and actuator fault,” Nonlinear Dynamics, vol. 92, no. 3, pp. 1299–1315, 2018.
  26. G. Ma, C. Chen, Y. Lyu, and Y. Guo, “Adaptive Backstepping-Based Neural Network Control for Hypersonic Reentry Vehicle With Input Constraints,” IEEE Access, vol. 6, pp. 1954–1966, 2018.
  27. X. Cao, P. Shi, Z. Li, and M. Liu, “Neural-Network-Based Adaptive Backstepping Control With Application to Spacecraft Attitude Regulation,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 9, pp. 4303–4313, 2018.
  28. Y. Pan, T. Sun, Y. Liu, and H. Yu, “Composite learning from adaptive backstepping neural network control,” Neural Networks, vol. 95, pp. 134–142, 2017.
  29. Y. Hou and S. Tong, “Command filter-based adaptive fuzzy backstepping control for a class of switched nonlinear systems,” Fuzzy Sets and Systems, vol. 314, pp. 46–60, 2017.
  30. B. Song and J. K. Hedrick, Dynamic Surface Control of Uncertain Nonlinear Systems: An LMI Approach, Springer, London, UK, 2011.
  31. X. Liu, C. Yang, Z. Chen, M. Wang, and C. Su, “Neuro-adaptive observer based control of flexible joint robot,” Neurocomputing, 2017.
  32. B. Xu and F. Sun, “Composite intelligent learning control of strict-feedback systems with disturbance,” IEEE Transactions on Cybernetics, vol. PP, no. 99, pp. 1–12, 2017.
  33. X. Shi, C. Lim, P. Shi, and S. Xu, “Adaptive Neural Dynamic Surface Control for Nonstrict-Feedback Systems With Output Dead Zone,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 11, pp. 5200–5213, 2018.
  34. T. Zhang, N. Wang, Q. Wang, and Y. Yi, “Adaptive neural control of constrained strict-feedback nonlinear systems with input unmodeled dynamics,” Neurocomputing, vol. 272, pp. 596–605, 2018.
  35. T. Zhang, M. Xia, Y. Yi, and Q. Shen, “Adaptive Neural Dynamic Surface Control of Pure-Feedback Nonlinear Systems With Full State Constraints and Dynamic Uncertainties,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 8, pp. 2378–2387, 2017.
  36. M. Wang and C. Wang, “Learning from adaptive neural dynamic surface control of strict-feedback systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 6, pp. 1247–1259, 2015.
  37. H. Dong, S. Gao, B. Ning, T. Tang, Y. Li, and K. P. Valavanis, “Error-Driven Nonlinear Feedback Design for Fuzzy Adaptive Dynamic Surface Control of Nonlinear Systems With Prescribed Tracking Performance,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, pp. 1–11.
  38. Y. Cui, H. Zhang, Q. Qu, and C. Luo, “Synthetic adaptive fuzzy tracking control for MIMO uncertain nonlinear systems with disturbance observer,” Neurocomputing, vol. 249, pp. 191–201, 2017.
  39. N. Wang, S. Tong, and Y. Li, “Observer-based adaptive fuzzy dynamic surface control of non-linear non-strict feedback system,” IET Control Theory & Applications, vol. 11, no. 17, pp. 3115–3121, 2017.
  40. X. Zhang, Z. Xu, C. Su et al., “Fuzzy Approximator Based Adaptive Dynamic Surface Control for Unknown Time Delay Nonlinear Systems With Input Asymmetric Hysteresis Nonlinearities,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 8, pp. 2218–2232, 2017.
  41. J.-J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, New Jersey, 1991.
  42. B. Rahmani and M. Belkheiri, “Adaptive neural network output feedback control for flexible multi-link robotic manipulators,” International Journal of Control, pp. 1–15, 2018.
  43. C. Lin, T. S. Li, and C. Chen, “Feedback linearization and feedforward neural network control with application to twin rotor mechanism,” Transactions of the Institute of Measurement and Control, vol. 40, no. 2, pp. 351–362, 2016.
  44. K. Shojaei and M. M. Arefi, “On the neuro-adaptive feedback linearising control of underactuated autonomous underwater vehicles in three-dimensional space,” IET Control Theory & Applications, vol. 9, no. 8, pp. 1264–1273, 2015.
  45. J. Fang, R. Yin, and X. Lei, “An adaptive decoupling control for three-axis gyro stabilized platform based on neural networks,” Mechatronics, vol. 27, pp. 38–46, 2015.
  46. N. Nikdel and M. A. Badamchizadeh, “Design and implementation of neural controllers for shape memory alloy–actuated manipulator,” Journal of Intelligent Material Systems and Structures, vol. 26, no. 1, pp. 20–28, 2014.
  47. M. Moradi and H. Malekizade, “Neural network identification based multivariable feedback linearization robust control for a two-link manipulator,” Journal of Intelligent & Robotic Systems, vol. 72, no. 2, pp. 167–178, 2013.
  48. J. M. Fernandes, M. C. Tanaka, R. C. Freire Júnior, and W. M. Bessa, “Feedback Linearization with a Neural Network Based Compensation Scheme,” in Intelligent Data Engineering and Automated Learning - IDEAL 2012, vol. 7435 of Lecture Notes in Computer Science, pp. 594–601, Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
  49. W. M. Bessa, E. Kreuzer, J. Lange, M. Pick, and E. Solowjow, “Design and Adaptive Depth Control of a Micro Diving Agent,” IEEE Robotics and Automation Letters, vol. 2, no. 4, pp. 1871–1877, 2017.
  50. Y. Zhang, G. Tao, and M. Chen, “Relative Degrees and Adaptive Feedback Linearization Control of T–S Fuzzy Systems,” IEEE Transactions on Fuzzy Systems, vol. 23, no. 6, pp. 2215–2230, 2015.
  51. G. Li, B. Li, D. Wu, J. Du, and G. Yang, “Feedback linearization-based self-tuning fuzzy proportional integral derivative control for atmospheric pressure simulator,” Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, vol. 228, no. 6, pp. 385–392, 2014.
  52. M. C. Tanaka, J. M. de Macedo Fernandes, and W. M. Bessa, “Feedback linearization with fuzzy compensation for uncertain nonlinear systems,” International Journal of Computers, Communications & Control, vol. 80, no. 5, pp. 736–743, 2013.
  53. Y. Soufi, S. Kahla, and M. Bechouat, “Feedback linearization control based particle swarm optimization for maximum power point tracking of wind turbine equipped by PMSG connected to the grid,” International Journal of Hydrogen Energy, vol. 41, no. 45, pp. 20950–20955, 2016.
  54. J. O. Pedro, M. Dangor, O. A. Dahunsi, and M. M. Ali, “Intelligent feedback linearization control of nonlinear electrohydraulic suspension systems using particle swarm optimization,” Applied Soft Computing, vol. 24, pp. 50–62, 2014.
  55. J. L. Chen and W. Chang, “Feedback linearization control of a two-link robot using a multi-crossover genetic algorithm,” Expert Systems with Applications, vol. 36, no. 2, pp. 4154–4159, 2009.
  56. X. H. Yu and O. Kaynak, “Sliding-mode control with soft computing: a survey,” IEEE Transactions on Industrial Electronics, vol. 56, no. 9, pp. 3275–3285, 2009.
  57. Y. Shtessel, C. Edwards, L. Fridman, and A. Levant, Sliding Mode Control and Observation, Springer, New York, NY, USA, 2014.
  58. W. M. Bessa, “Some remarks on the boundedness and convergence properties of smooth sliding mode controllers,” International Journal of Automation and Computing, vol. 6, no. 2, pp. 154–158, 2009.
  59. X. Yu and O. Kaynak, “Sliding Mode Control Made Smarter: A Computational Intelligence Perspective,” IEEE Systems, Man, and Cybernetics Magazine, vol. 3, no. 2, pp. 31–34, 2017.
  60. W. M. Bessa and R. S. Barrêto, “Adaptive fuzzy sliding mode control of uncertain nonlinear systems,” Sba: Controle & Automação Sociedade Brasileira de Automatica, vol. 21, no. 2, pp. 117–126, 2010.
  61. W. M. Bessa, M. S. Dutra, and E. Kreuzer, “Depth control of remotely operated underwater vehicles using an adaptive fuzzy sliding mode controller,” Robotics and Autonomous Systems, vol. 56, no. 8, pp. 670–677, 2008. View at Publisher · View at Google Scholar · View at Scopus
  62. W. M. Bessa, M. S. Dutra, and E. Kreuzer, “An adaptive fuzzy sliding mode controller for remotely operated underwater vehicles,” Robotics and Autonomous Systems, vol. 58, no. 1, pp. 16–26, 2010. View at Publisher · View at Google Scholar
  63. W. Bessa, A. de Paula, and M. Savi, “Adaptive fuzzy sliding mode control of smart structures,” The European Physical Journal Special Topics, vol. 222, no. 7, pp. 1541–1551, 2013. View at Publisher · View at Google Scholar
  64. W. Bessa, A. de Paula, and M. Savi, “Adaptive fuzzy sliding mode control of a chaotic pendulum with noisy signals,” ZAMM - Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik, vol. 94, no. 3, pp. 256–263, 2014. View at Publisher · View at Google Scholar
  65. W. M. Bessa, M. S. Dutra, and E. Kreuzer, “Sliding mode control with adaptive fuzzy dead-zone compensation of an electro-hydraulic servo-system,” Journal of Intelligent & Robotic Systems, vol. 58, no. 1, pp. 3–16, 2010. View at Publisher · View at Google Scholar · View at Scopus
  66. F. Scarselli and A. Chung Tsoi, “Universal Approximation Using Feedforward Neural Networks: A Survey of Some Existing Methods, and Some New Results,” Neural Networks, vol. 11, no. 1, pp. 15–37, 1998. View at Publisher · View at Google Scholar
  67. J. E. Azzaro and R. A. Veiga, “Sliding mode controller with neural network identification,” IEEE Latin America Transactions, vol. 13, no. 12, pp. 3754–3757, 2015.
  68. J. Fei and C. Lu, “Adaptive fractional order sliding mode controller with neural estimator,” Journal of The Franklin Institute, vol. 355, no. 5, pp. 2369–2391, 2018.
  69. J. Fei and C. Lu, “Adaptive sliding mode control of dynamic systems using double loop recurrent neural network structure,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 4, pp. 1275–1286, 2017.
  70. H. Zhang, Q. Qu, G. Xiao, and Y. Cui, “Optimal guaranteed cost sliding mode control for constrained-input nonlinear systems with matched and unmatched disturbances,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 6, pp. 2112–2126, 2018.
  71. N. Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine, John Wiley & Sons, Inc., New York, NY, USA, 1948.
  72. R. Pfeifer and C. Scheier, Understanding Intelligence, MIT Press, Cambridge, MA, USA, 2001.
  73. A. Binet and T. Simon, The Development of Intelligence in Children, Williams & Wilkins, Baltimore, 1916.
  74. R. Sternberg, Beyond IQ: A Triarchic Theory of Human Intelligence, Cambridge University Press, New York, 1985.
  75. H. Roy, K. Dare, and M. Ibba, “Adaptation of the bacterial membrane to changing environments using aminoacylated phospholipids,” Molecular Microbiology, vol. 71, no. 3, pp. 547–550, 2009.
  76. R. Dawkins, The Selfish Gene, Oxford University Press, Oxford, UK, 1976.
  77. W. F. Dearborn, “Intelligence and its measurement: a symposium–XII,” Journal of Educational Psychology, vol. 12, no. 4, pp. 210–212, 1921.
  78. D. O. Hebb, The Organization of Behavior, John Wiley & Sons, Inc., New York, NY, USA, 1949.
  79. D. M. Wolpert, “Computational approaches to motor control,” Trends in Cognitive Sciences, vol. 1, no. 6, pp. 209–216, 1997.
  80. S. J. Blakemore, S. J. Goodbody, and D. M. Wolpert, “Predicting the consequences of our own actions: the role of sensorimotor context estimation,” The Journal of Neuroscience, vol. 18, no. 18, pp. 7511–7518, 1998.
  81. G. Edwards, P. Vetter, F. McGruer, L. S. Petro, and L. Muckli, “Predictive feedback to V1 dynamically updates with sensory input,” Scientific Reports, vol. 7, no. 1, 2017.
  82. R. R. Llinás, I of the Vortex: From Neurons to Self, The MIT Press, Cambridge, MA, USA, 2001.
  83. R. R. Llinás and S. Roy, “The ‘prediction imperative’ as the basis for self-awareness,” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 364, no. 1521, pp. 1301–1307, 2009.
  84. A. Roberts and K. Tregonning, “The robustness of natural systems,” Nature, vol. 288, no. 5788, pp. 265–266, 1980.
  85. H. Kitano, “Biological robustness,” Nature Reviews Genetics, vol. 5, no. 11, pp. 826–837, 2004.
  86. A. Schuster, Robust Intelligent Systems, Springer, London, UK, 2008.
  87. J. S. Jaffe, P. J. Franks, P. L. Roberts et al., “A swarm of autonomous miniature underwater robot drifters for exploring submesoscale ocean dynamics,” Nature Communications, vol. 8, p. 14189, 2017.
  88. R. W. Coutinho, A. Boukerche, L. F. Vieira, and A. A. Loureiro, “Geographic and opportunistic routing for underwater sensor networks,” IEEE Transactions on Computers, vol. 65, no. 2, pp. 548–561, 2016.
  89. J. dos Santos and W. Bessa, “Intelligent control for accurate position tracking of electrohydraulic actuators,” IEEE Electronics Letters.
  90. G. Brinkmann, Machine Learning for Robot Control: Concept, Development, and Realization of a Dive Agent, Master’s thesis, Institute of Mechanics and Ocean Engineering, Hamburg University of Technology, Hamburg, Germany, 2017.
  91. G. Brinkmann, W. M. Bessa, D. Duecker, E. Kreuzer, and E. Solowjow, “Reinforcement learning of depth stabilization with a micro diving agent,” in Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–7, Brisbane, QLD, Australia, May 2018.
  92. T. I. Fossen, Guidance and Control of Ocean Vehicles, John Wiley & Sons, Inc., Chichester, UK, 1994.
  93. R. Bellman, Adaptive Control Processes: A Guided Tour, Princeton University Press, Princeton, NJ, USA, 1961.
  94. J. A. Farrell and M. M. Polycarpou, Adaptive Approximation Based Control: Unifying Neural, Fuzzy and Traditional Adaptive Approximation Approaches, Wiley, Hoboken, NJ, USA, 2006.