Complexity
Volume 2017, Article ID 6290646, 11 pages
https://doi.org/10.1155/2017/6290646
Research Article

Centralized and Decentralized Data-Sampling Principles for Outer-Synchronization of Fractional-Order Neural Networks

Hubei Normal University, Hubei 435002, China

Correspondence should be addressed to Jin-E Zhang; zhang86021205@163.com

Received 24 December 2016; Revised 6 February 2017; Accepted 21 February 2017; Published 8 March 2017

Academic Editor: Olfa Boubaker

Copyright © 2017 Jin-E Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper investigates the outer-synchronization of fractional-order neural networks. Using centralized and decentralized data-sampling principles and the theory of fractional differential equations, sufficient criteria for outer-synchronization of the controlled fractional-order neural networks are derived for structure-dependent centralized data-sampling, state-dependent centralized data-sampling, and state-dependent decentralized data-sampling, respectively. A numerical example is given to illustrate the effectiveness of the theoretical results.

1. Introduction

The fractional operator has become visible in many application domains [1–15]. Faced with demanding performance expectations under uncertainty, the fractional operator offers designers more degrees of freedom to meet predefined performance indexes. After the importance of the fractional operator was gradually recognized, it was found that a fractional-order model gives a more accurate description, one that can differ substantially from that of the corresponding integer-order model. As a direct application, this characteristic of fractional-order models can be used to identify possible behaviors of electrical signals from neurons. In the physical implementation of neurodynamic systems, an arbitrary-order analog fractance circuit is most appropriate, as it reveals profoundly the relationships among neural circuit elements [9–11]. In that sense, real neurodynamic systems should be addressed by fractional-order models. Fractional-order neurodynamic systems can better describe how action potentials in neurons are launched and propagated. In addition, fractional-order neurodynamic systems possess infinite memory, whereas integer-order neurodynamic systems lack this feature [3–8, 12–15]. Therefore, fractional-order neurodynamic systems have the potential to accomplish what integer-order ones cannot. More feasible analysis methods and easy-to-use techniques for dealing with fractional-order neurodynamic systems are worth looking into.

As a coherent behavior within nonlinear systems, synchronization has attracted phenomenal worldwide attention. Many studies have shown that the synchronization mechanism is a universal phenomenon with a wide range of applications in engineering systems. Generally, two schemes for synchronization are frequently used: inner-synchronization and outer-synchronization. In inner-synchronization, all nodes within a single network achieve a coherent behavior; in outer-synchronization, all corresponding individuals in two networks achieve identical behaviors. In many application fields, outer-synchronization may be the more practical notion [16–23]. For example, in heuristic computational intelligence, it is known that outer-synchronization is rooted in brain-inspired computing, from evolutionary strategies to cognitive tasks. Nevertheless, results focusing on outer-synchronization of complex control systems have seldom been reported [19]. Control strategies for outer-synchronization therefore deserve more investigation.

Sampled-data control, which uses only local information, has recently generated significant research interest [24–38]. Unlike continuous-time control, which requires continuous transmission of data, sampled-data control is more appropriate in a networked environment. For control systems, once effective sampling policies and schedules are given, sampled-data control can dramatically reduce communication and save energy. Thus, how to develop high-efficiency, heuristic information-based sampled-data control with the ultimate aim of maximizing the value of the data collected is worth studying [38]. However, studies of data-sampling strategies for control systems are still at an early stage.

Motivated by the above discussions, in this paper we introduce centralized and decentralized data-sampling principles to achieve outer-synchronization between coupled fractional-order neural networks. Efficiently allocating the limited energy resources under centralized and decentralized data-sampling principles, so as to maximize the information value of the data collected, is clearly a step forward. Meanwhile, to design the sampling method more efficiently, we merge structure and state information through the centralized and decentralized data-sampling principles and then select the best sampling times. On the basis of analytical tools from fractional differential equations, a series of criteria on outer-synchronization is derived. It should be noted that such criteria capture information about the sampling pattern and may have a much wider range of application.

The rest of the paper is organized as follows. Section 2 presents the preliminaries and problem formulation. Section 3 states the main results in detail. Section 4 illustrates a simulation example. Finally, Section 5 concludes the paper.

2. Preliminaries and Problem Formulation

First, some preliminaries of fractional operator are given.

The fractional integral for a function $f(t)$ with order $\alpha > 0$ is described as
$${}_{t_0}D_t^{-\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t - s)^{\alpha - 1} f(s)\,\mathrm{d}s,$$
where $\Gamma(\cdot)$ is the Gamma function and $t_0$ is the initial time.

The Caputo fractional derivative for $f(t)$ with order $\alpha > 0$ is described as
$${}^{C}_{t_0}D_t^{\alpha} f(t) = \frac{1}{\Gamma(n - \alpha)} \int_{t_0}^{t} (t - s)^{n - \alpha - 1} f^{(n)}(s)\,\mathrm{d}s, \quad n - 1 < \alpha < n,$$
where $\Gamma(\cdot)$ is the Gamma function, $n$ is a positive integer, and $t_0$ is the initial time.
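Since the Caputo derivative admits a simple discretization, the definition above can be illustrated numerically. The sketch below uses the classical L1 finite-difference scheme for $0 < \alpha < 1$; the scheme, grid size, and test function are illustrative choices, not part of this paper:

```python
from math import gamma

def caputo_l1(f, t, alpha, n=400):
    """Approximate the Caputo derivative D^alpha f(t) for 0 < alpha < 1
    using the L1 finite-difference scheme on a uniform grid of n steps."""
    h = t / n
    acc = 0.0
    for j in range(n):
        # L1 weights: b_j = (j+1)^(1-alpha) - j^(1-alpha)
        b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)
        acc += b * (f(t - j * h) - f(t - (j + 1) * h))
    return acc * h ** (-alpha) / gamma(2 - alpha)

# For f(t) = t, the Caputo derivative is t^(1-alpha) / Gamma(2-alpha);
# the L1 scheme reproduces it exactly (up to roundoff) for linear f.
```

For smooth nonlinear functions, the L1 scheme converges at the known rate $O(h^{2-\alpha})$, which is adequate for the kind of simulation reported later in this paper.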

The one-parameter Mittag-Leffler function is described as
$$E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(k\alpha + 1)},$$
where $\Gamma(\cdot)$ is the Gamma function, $\alpha > 0$, and $z$ is a complex number.
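The Mittag-Leffler function, which governs the convergence estimates appearing later, can be evaluated directly from its power series. This is a minimal sketch; the truncation settings are illustrative:

```python
from math import gamma

def mittag_leffler(alpha, z, tol=1e-15, max_terms=200):
    """One-parameter Mittag-Leffler function E_alpha(z) via its power
    series sum_{k>=0} z^k / Gamma(alpha*k + 1), truncated once terms
    become negligible."""
    total = 0.0
    for k in range(max_terms):
        try:
            term = z ** k / gamma(alpha * k + 1)
        except OverflowError:
            break  # Gamma has overflowed; remaining terms are negligible
        total += term
        if abs(term) < tol:
            break
    return total

# Sanity checks: E_1(z) = exp(z) and E_2(z) = cosh(sqrt(z)).
```

The special cases $E_1(z) = e^z$ and $E_2(z) = \cosh\sqrt{z}$ give quick correctness checks, and $E_\alpha(-t^\alpha)$ with $0 < \alpha < 1$ exhibits the slow algebraic tail that distinguishes fractional-order decay from exponential decay.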

Consider a class of fractional-order neural networks
$${}^{C}_{t_0}D_t^{\alpha} x_i(t) = -c_i(t) x_i(t) + \sum_{j=1}^{n} a_{ij}(t) f_j\big(x_j(t)\big) + I_i(t), \quad i = 1, 2, \ldots, n, \tag{4}$$
where $c_i(t)$, $a_{ij}(t)$, and $I_i(t)$ are piecewise continuous and bounded, and the feedback function $f_j(\cdot)$ satisfies
$$\big| f_j(u) - f_j(v) \big| \le l_j \big| u - v \big|, \quad \forall u, v \in \mathbb{R}, \tag{5}$$
in which $l_j > 0$.
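A common activation such as $\tanh$ satisfies a Lipschitz-type feedback condition of the kind assumed above, with constant 1. The following quick numerical check illustrates this; the grid and range are arbitrary illustrative choices:

```python
from math import tanh

def lipschitz_ratio(f, lo=-5.0, hi=5.0, n=1000):
    """Largest observed |f(u) - f(v)| / |u - v| over adjacent grid
    points: a numerical (not rigorous) estimate of the Lipschitz
    constant of f on [lo, hi]."""
    pts = [lo + (hi - lo) * i / n for i in range(n + 1)]
    best = 0.0
    for i in range(n):
        u, v = pts[i], pts[i + 1]
        best = max(best, abs(f(u) - f(v)) / abs(u - v))
    return best

# tanh has slope at most 1 (attained at the origin), so the observed
# ratio stays below 1 and approaches it near zero.
```

Such an estimate only samples difference quotients, but for $\tanh$ the mean value theorem guarantees the true constant is $\sup |\tanh'| = 1$.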

For the centralized data-sampling principle, (4) is rewritten as
$${}^{C}_{t_0}D_t^{\alpha} x_i(t) = -c_i(t) x_i(t) + \sum_{j=1}^{n} a_{ij}(t) f_j\big(x_j(t_m)\big) + I_i(t), \quad t \in [t_m, t_{m+1}), \tag{6}$$
where $t_m$ is simple notation for the sampling time point, with $t_0 < t_1 < t_2 < \cdots$, and the sampling time sequence is uniform for all the system states. Every neuron broadcasts its state to its out-neighbors and receives the state information from its in-neighbors at the same time point $t_m$.

For the decentralized data-sampling principle, (4) is rewritten as
$${}^{C}_{t_0}D_t^{\alpha} x_i(t) = -c_i(t) x_i(t) + \sum_{j=1}^{n} a_{ij}(t) f_j\big(x_j(t_m^{j})\big) + I_i(t), \tag{7}$$
where $t_m^{j}$ is simple notation for the sampling time point of state $j$, with $t_0^{j} < t_1^{j} < t_2^{j} < \cdots$, and the sampling time sequences are distributed across the individual states. Each neuron pushes its state information to its out-neighbors at time $t_m^{j}$ when it updates its state, and it receives the information of an in-neighbor's state at the time when that neighbor neuron updates its own state.
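The flavor of state-dependent data-sampling can be seen in a toy sketch: a node re-broadcasts its state only when it has drifted far enough from the last broadcast sample. Everything here is an illustrative simplification (an integer-order scalar signal with a fixed absolute threshold), not the paper's actual sampling rules:

```python
from math import exp

def event_triggered_samples(threshold=0.1, T=5.0, dt=0.001):
    """Follow the trajectory x(t) = exp(-t) on [0, T] and record a new
    sample whenever the state has drifted by `threshold` from the last
    broadcast sample. Returns the list of (time, value) sample pairs."""
    last = 1.0  # initial broadcast sample x(0)
    events = []
    steps = int(T / dt)
    for i in range(1, steps + 1):
        t = i * dt
        x = exp(-t)
        if abs(x - last) >= threshold:  # state-dependent trigger rule
            last = x
            events.append((t, x))
    return events

# Only a handful of samples are broadcast instead of one per time step,
# which is the communication saving that data-sampling control targets.
```

Under a centralized rule all states would share one such trigger clock; under a decentralized rule each state runs its own copy of the loop, which is exactly the distinction between systems (6) and (7).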

Now, we state the definition and problem formulation.

Definition 1 (see [19]). For any two trajectories $x(t)$ and $y(t)$ of (4) starting from different initial values $x(t_0)$ and $y(t_0)$, if there exists some control scheme such that
$$\lim_{t \to +\infty} \big\| x(t) - y(t) \big\| = 0, \tag{8}$$
then we say that system (4) achieves outer-synchronization, where $\| \cdot \|$ denotes the vector norm.

Let $x(t)$ and $y(t)$ be two trajectories of (6) starting from different initial values $x(t_0)$ and $y(t_0)$. Defining the synchronization error $e(t) = x(t) - y(t)$, it follows that
$${}^{C}_{t_0}D_t^{\alpha} e_i(t) = -c_i(t) e_i(t) + \sum_{j=1}^{n} a_{ij}(t) g_j\big(e_j(t_m)\big), \quad t \in [t_m, t_{m+1}), \tag{9}$$
where $g_j(e_j(t_m)) = f_j(x_j(t_m)) - f_j(y_j(t_m))$, and by (5), $|g_j(e_j(t_m))| \le l_j |e_j(t_m)|$ for all $j = 1, 2, \ldots, n$.

When we adopt the centralized data-sampling principle via structure to achieve outer-synchronization of (6), according to Definition 1, we need to design a control strategy based on the system structure of (9) such that
$$\lim_{t \to +\infty} \big\| e(t) \big\| = 0, \tag{10}$$
where $\| \cdot \|$ denotes the vector norm.

When we adopt the centralized data-sampling principle via state to achieve outer-synchronization of (6), we consider the state measurement error
$$w(t) = e(t_m) - e(t), \quad t \in [t_m, t_{m+1}), \tag{11}$$
where $m = 0, 1, 2, \ldots$. According to Definition 1, we need to design a control strategy based on the state measurement error (11) such that
$$\lim_{t \to +\infty} \big\| e(t) \big\| = 0, \tag{12}$$
where $\| \cdot \|$ denotes the vector norm.

Let $x(t)$ and $y(t)$ be two trajectories of (7) starting from different initial values $x(t_0)$ and $y(t_0)$. Defining $e(t) = x(t) - y(t)$, it follows that
$${}^{C}_{t_0}D_t^{\alpha} e_i(t) = -c_i(t) e_i(t) + \sum_{j=1}^{n} a_{ij}(t) g_j\big(e_j(t_m^{j})\big), \tag{13}$$
where $g_j(e_j(t_m^{j})) = f_j(x_j(t_m^{j})) - f_j(y_j(t_m^{j}))$, and by (5), $|g_j(e_j(t_m^{j}))| \le l_j |e_j(t_m^{j})|$ for all $j = 1, 2, \ldots, n$.

When we adopt the decentralized data-sampling principle via state to achieve outer-synchronization of (7), we consider the state measurement error
$$w_j(t) = e_j(t_m^{j}) - e_j(t), \quad t \in [t_m^{j}, t_{m+1}^{j}), \tag{14}$$
where $m = 0, 1, 2, \ldots$ and $j = 1, 2, \ldots, n$. According to Definition 1, we need to design a control strategy based on the state measurement error (14) such that
$$\lim_{t \to +\infty} \big\| e(t) \big\| = 0, \tag{15}$$
where $\| \cdot \|$ denotes the vector norm.

Next, we present relevant lemmas.

Lemma 2 (see [1]). Let $0 < \alpha < 1$. If $h(t)$ is continuous, then the initial value problem ${}^{C}_{t_0}D_t^{\alpha} x(t) = h(t)$, $x(t_0) = x_0$, is equivalent to the integral equation
$$x(t) = x_0 + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t - s)^{\alpha - 1} h(s)\,\mathrm{d}s,$$
where $\Gamma(\cdot)$ is the Gamma function.

Lemma 3 (see [39]). Given $\beta > 0$, let $a(t)$ be nonnegative and locally integrable on $[t_0, T)$; let $g(t)$ be continuous, bounded, nonnegative, and nondecreasing on $[t_0, T)$. Assuming $u(t)$ to be nonnegative and locally integrable on $[t_0, T)$ with
$$u(t) \le a(t) + g(t) \int_{t_0}^{t} (t - s)^{\beta - 1} u(s)\,\mathrm{d}s,$$
then
$$u(t) \le a(t) + \int_{t_0}^{t} \left[ \sum_{k=1}^{\infty} \frac{\big(g(t)\Gamma(\beta)\big)^{k}}{\Gamma(k\beta)} (t - s)^{k\beta - 1} a(s) \right] \mathrm{d}s.$$
Moreover, if $a(t)$ is nondecreasing on $[t_0, T)$, then
$$u(t) \le a(t) E_{\beta}\big(g(t)\Gamma(\beta)(t - t_0)^{\beta}\big),$$
where $\Gamma(\cdot)$ is the Gamma function and $E_{\beta}(\cdot)$ is the one-parameter Mittag-Leffler function.
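Lemma 3 can be sanity-checked numerically: solving the equality case of its integral inequality (with constant $a$ and $g$) by product integration should stay at or below the Mittag-Leffler bound. The discretization below is an illustrative sketch, not part of the paper:

```python
from math import gamma

def ml(beta, z, tol=1e-15, max_terms=200):
    """One-parameter Mittag-Leffler E_beta(z), truncated power series."""
    total = 0.0
    for k in range(max_terms):
        try:
            term = z ** k / gamma(beta * k + 1)
        except OverflowError:
            break
        total += term
        if abs(term) < tol:
            break
    return total

def volterra(a, g, beta, T, n=1500):
    """Product-integration solution of
        u(t) = a + g * int_0^t (t-s)^(beta-1) u(s) ds,
    approximating u by its left-endpoint value on each step; the weakly
    singular kernel is integrated exactly on each subinterval."""
    h = T / n
    u = [a]
    for m in range(1, n + 1):
        acc = 0.0
        for j in range(m):
            # exact integral of (t_m - s)^(beta-1) over [t_j, t_{j+1}]
            w = ((m - j) ** beta - (m - j - 1) ** beta) * h ** beta / beta
            acc += w * u[j]
        u.append(a + g * acc)
    return u[-1]

# Gronwall bound of Lemma 3: u(T) <= a * E_beta(g * Gamma(beta) * T^beta).
```

Since $u$ is increasing here, the left-endpoint rule underestimates the integral, so the computed $u(T)$ sits just below the analytic value $a\,E_\beta(g\Gamma(\beta)T^\beta)$, confirming the direction of the inequality.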

We end this section with some notation needed later.

Let be positive constants, throughout this paper; denotewhere . For vector , vector norm . In addition, by the boundedness of and , there exist positive constants and such that

3. Main Results

For the problem formulation in the preceding section, we now propose the corresponding control schemes for the centralized data-sampling principle and the decentralized data-sampling principle, respectively.

To facilitate the narrative, we first present the control designs and then analyze the theoretical results.

3.1. Centralized Data-Sampling Principle

Theorem 4. Let and be positive constants with and . Assume that there exist positive constants such that for all and . Set as a time point such thatfor Then system (6) reaches outer-synchronization.

Proof. From for all and , together with (23), it follows thatthenfor all and any . According to (9), will not update untilat time point . Thus, we get , which impliesfor Thentherefore, the Zeno behavior can be excluded. Combining with (24) and (28),sofor By the definition of vector norm in this paper, from (9), now let us consider at time ,whereAccording to (5), obviously, for all , andNotice that ; then for any ,thusBy (32) and (36),which leads tohenceRecalling system (9), we havewhere is defined in (22). It can be concluded that outer-synchronization of system (6) is proved.

Remark 5. From inequality (28), the inter-sample intervals admit a positive lower bound for all sampling indices, which excludes the Zeno behavior for rule (24).

Theorem 6. Let be a positive and continuous function on . Set as a time point such thatfor all , where is defined in (11). If there exist positive constants such that for some and all , , then system (6) reaches outer-synchronization.

Proof. According to Lemma 2, from (9) and (42),where is defined in (22), andOn the other hand, by (43),for , where .
Using Lemma 3, from (45), it follows that the error is bounded by a Mittag-Leffler-type estimate, which implies that $e(t)$ converges to zero along the sampling time sequence. Therefore, system (6) reaches outer-synchronization.

3.2. Decentralized Data-Sampling Principle

Theorem 7. Let be positive and continuous on . Set as a time point such thatfor and all , where is defined in (14). If there exist positive constants such that for some and all , and , then system (7) reaches outer-synchronization.

Proof. According to Lemma 2, from (13) and (47),where is defined in (22), andOn the other hand, by (48),for , , where .
Using Lemma 3, from (50), it follows that the error is bounded by a Mittag-Leffler-type estimate, which implies that $e(t)$ converges to zero along the sampling time sequences. Therefore, system (7) reaches outer-synchronization.

Remark 8. As in the corresponding theorem of [19], under the data-sampling rule in Theorem 6 or Theorem 7, the interevent interval of each system state is strictly positive and possesses a common positive lower bound. Furthermore, the Zeno behavior is excluded.

Remark 9. For sampled-data control, choosing a proper scheme that maximizes the value of the data collected to control the system is challenging. For example, as revealed in [9, 10], it is extremely difficult to design the sampling time points inherited from a sampled-data control strategy. However, according to Theorems 4–7, this difficulty can be effectively resolved if the centralized and decentralized data-sampling principles are cleverly utilized.

Remark 10. The three control schemes in Theorems 4–7 differ in type and emphasis rather than in merit. Theorem 4 focuses on the centralized data-sampling principle via structure; Theorem 6 concerns the centralized data-sampling principle via state; Theorem 7 places emphasis on the decentralized data-sampling principle via state.

Remark 11. Note that the sampled-data control in Theorems 4–7 acts only at the sampling time points; that is, every system state employs only its neighbors' information at the sampling instants. Thus, compared with continuous-time control strategies, the control schemes in Theorems 4–7 can effectively save bandwidth and reduce communication cost. Moreover, the results obtained here are the first on centralized and decentralized data-sampling principles for outer-synchronization of fractional-order neural networks.

Remark 12. The key features of outer-synchronization in Theorems 4–7 are as follows. (1) Each outer-synchronization scheme is closely related to the sampling time points: once the sampling time points are given, the states of the controlled fractional-order neural networks achieve outer-synchronization. (2) The centralized data-sampling principle via structure makes full use of the characteristics of the system itself, while the centralized or decentralized data-sampling principle via state skillfully exploits the state measurement error.

Remark 13. The analytical methods for outer-synchronization in Theorems 4–7 are quite different from those for conventional complete synchronization, projective synchronization, phase synchronization, distributed synchronization, pinning synchronization, and cluster synchronization.

4. A Numerical Example

In this section, a numerical example is used to show the effectiveness of the obtained results.

Consider a class of fractional-order neural networks as follows:where , , , , , .

By direct calculation, we can obtain

Choosing the control parameters as indicated, the required inequalities of Theorem 4 hold, and so system (52) reaches outer-synchronization. Figures 1 and 2 depict the dynamics of the drive and response states at the triggering time points determined by Theorem 4. Figure 3 describes the release time points and release intervals.

Figure 1: Dynamics of the drive and response states under the triggering mechanism of Theorem 4.
Figure 2: Dynamics of the drive and response states under the triggering mechanism of Theorem 4.
Figure 3: The release time points and release intervals under the triggering mechanism of Theorem 4.

With the selected parameters, the conditions of Theorem 6 hold, and so system (52) reaches outer-synchronization. Figures 4 and 5 depict the dynamics of the drive and response states at the triggering time points determined by Theorem 6. Figure 6 describes the release time points and release intervals.

Figure 4: Dynamics of the drive and response states under the triggering mechanism of Theorem 6.
Figure 5: Dynamics of the drive and response states under the triggering mechanism of Theorem 6.
Figure 6: The release time points and release intervals under the triggering mechanism of Theorem 6.

With the selected parameters, the conditions of Theorem 7 hold, and so system (52) reaches outer-synchronization. Figures 7 and 8 depict the dynamics of the drive and response states at the triggering time points determined by Theorem 7. Figure 9 describes the release time points and release intervals.

Figure 7: Dynamics of the drive and response states under the triggering mechanism of Theorem 7.
Figure 8: Dynamics of the drive and response states under the triggering mechanism of Theorem 7.
Figure 9: The release time points and release intervals under the triggering mechanism of Theorem 7.
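The Mittag-Leffler-type decay seen in these simulations can be reproduced in a few lines. The sketch below integrates the scalar test equation $D^{\alpha} e(t) = -e(t)$ with a Grünwald-Letnikov discretization; this is a stand-in for the synchronization error dynamics, not the paper's actual system (52), and the order, horizon, and step size are illustrative choices:

```python
def gl_decay(alpha=0.98, T=2.0, h=0.01):
    """Integrate D^alpha x = -x, x(0) = 1, with the Grunwald-Letnikov
    scheme; the analytic solution decays like E_alpha(-t^alpha)."""
    n = int(T / h)
    # GL binomial coefficients c_j = (-1)^j * C(alpha, j), via recurrence
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    x = [1.0]
    for m in range(1, n + 1):
        hist = sum(c[j] * x[m - j] for j in range(1, m + 1))
        # h^-alpha * (x_m + hist) = -x_m  =>  solve for x_m
        x.append(-hist / (1.0 + h ** alpha))
    return x

# The trajectory stays positive and decays toward zero, mirroring the
# Mittag-Leffler convergence of the synchronization error.
```

Because the whole history enters each step through the coefficients $c_j$, the computation itself exhibits the infinite-memory property of fractional-order dynamics discussed in the introduction.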

Remark 14. In the existing literature, no theoretical criterion has been reported for achieving outer-synchronization of (52). In addition, using centralized or decentralized data-sampling principles to analyze and control fractional-order systems is also rare.

Remark 15. According to the simulation analysis in Figures 1–9, there is no essential difference in outer-synchronization performance among the three control schemes of Theorems 4–7. A comparative analysis of Figures 3, 6, and 9 shows that the release intervals under the control scheme of Theorem 4 are relatively small, while the triggering time points under the control scheme of Theorem 7 are spread more sparsely.

5. Concluding Remarks

In this paper, we show that outer-synchronization of fractional-order neural networks can be achieved by applying appropriate centralized and decentralized data-sampling principles. These theoretical results improve and supplement some existing related results. The results obtained here are sufficient conditions for outer-synchronization of fractional-order neural networks, and there may remain room for improvement. Further extensions would be welcome: (1) outer-synchronization of fractional-order neural networks considering both conservativeness and complexity; (2) outer-synchronization of fractional-order neural networks subject to time delay; (3) outer-synchronization of fractional-order neural networks subject to stochastic disturbance.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Research Project of Hubei Provincial Department of Education of China under Grant T201412.

References

  1. B. Chen and J. Chen, “Global asymptotical ω-periodicity of a fractional-order non-autonomous neural networks,” Neural Networks, vol. 68, pp. 78–88, 2015.
  2. I. Pan and S. Das, “Fractional order AGC for distributed energy resources using robust optimization,” IEEE Transactions on Smart Grid, vol. 7, no. 5, pp. 2175–2186, 2015.
  3. J. Shen and J. Lam, “Stability and performance analysis for positive fractional-order systems with time-varying delays,” IEEE Transactions on Automatic Control, vol. 61, no. 9, pp. 2676–2681, 2016.
  4. L. P. Chen, R. C. Wu, J. Cao, and J.-B. Liu, “Stability and synchronization of memristor-based fractional-order delayed neural networks,” Neural Networks, vol. 71, pp. 37–44, 2015.
  5. C. Huang, J. Cao, M. Xiao, A. Alsaedi, and T. Hayat, “Bifurcations in a delayed fractional complex-valued neural network,” Applied Mathematics and Computation, vol. 292, pp. 210–227, 2017.
  6. S. Liang, R. Wu, and L. Chen, “Comparison principles and stability of nonlinear fractional-order cellular neural networks with multiple time delays,” Neurocomputing, vol. 168, pp. 618–625, 2015.
  7. M. B. Delghavi, S. Shoja-Majidabad, and A. Yazdani, “Fractional-order sliding-mode control of islanded distributed energy resource systems,” IEEE Transactions on Sustainable Energy, vol. 7, no. 4, pp. 1482–1491, 2016.
  8. R. Rakkiyappan, G. Velmurugan, and J. Cao, “Stability analysis of fractional-order complex-valued neural networks with time delays,” Chaos, Solitons and Fractals, vol. 78, pp. 297–316, 2015.
  9. A. Wu, L. Liu, T. Huang, and Z. Zeng, “Mittag-Leffler stability of fractional-order neural networks in the presence of generalized piecewise constant arguments,” Neural Networks, vol. 85, pp. 118–127, 2017.
  10. A. Wu and Z. Zeng, “Boundedness, Mittag-Leffler stability and asymptotical ω-periodicity of fractional-order fuzzy neural networks,” Neural Networks, vol. 74, pp. 73–84, 2016.
  11. A. L. Wu and Z. G. Zeng, “Global Mittag-Leffler stabilization of fractional-order memristive neural networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 1, pp. 206–217, 2017.
  12. X. Yang, C. Li, Q. Song, T. Huang, and X. Chen, “Mittag-Leffler stability analysis on variable-time impulsive fractional-order neural networks,” Neurocomputing, vol. 207, pp. 276–286, 2015.
  13. N. Ullah, M. Asghar Ali, R. Ahmad, and A. Khattak, “Fractional order control of static series synchronous compensator with parametric uncertainty,” IET Generation, Transmission & Distribution, vol. 11, no. 1, pp. 289–302, 2017.
  14. S. Zhang, Y. G. Yu, and H. Wang, “Mittag-Leffler stability of fractional-order Hopfield neural networks,” Nonlinear Analysis: Hybrid Systems, vol. 16, pp. 104–121, 2015.
  15. A. Azami, S. V. Naghavi, R. Dadkhah Tehrani, M. H. Khooban, and F. Shabaninia, “State estimation strategy for fractional order systems with noises and multiple time delayed measurements,” IET Science, Measurement & Technology, vol. 11, no. 1, pp. 9–17, 2017.
  16. T. Jing, F. Chen, and Q. Li, “Finite-time mixed outer synchronization of complex networks with time-varying delay and unknown parameters,” Applied Mathematical Modelling, vol. 39, no. 23-24, pp. 7734–7743, 2015.
  17. S. Li, “Linear generalized outer synchronization between two complex dynamical networks with time-varying coupling delay,” Optik, vol. 127, no. 22, pp. 10467–10477, 2016.
  18. J. Lu, C. Ding, J. Lou, and J. Cao, “Outer synchronization of partially coupled dynamical networks via pinning impulsive controllers,” Journal of the Franklin Institute, vol. 352, no. 11, pp. 5024–5041, 2015.
  19. W. Lu, R. Zheng, and T. Chen, “Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling,” Neural Networks, vol. 75, pp. 22–31, 2016.
  20. W. G. Sun, Y. Q. Wu, J. Y. Zhang, and S. Qin, “Inner and outer synchronization between two coupled networks with interactions,” Journal of the Franklin Institute, vol. 352, no. 8, pp. 3166–3177, 2015.
  21. Y. Sun, W. Li, and J. Ruan, “Generalized outer synchronization between complex dynamical networks with time delay and noise perturbation,” Communications in Nonlinear Science and Numerical Simulation, vol. 18, no. 4, pp. 989–998, 2013.
  22. Z. Wu, G. Chen, and X. Fu, “Outer synchronization of drive-response dynamical networks via adaptive impulsive pinning control,” Journal of the Franklin Institute, vol. 352, no. 10, pp. 4297–4308, 2015.
  23. Y. Yang, Y. Wang, and T. Li, “Outer synchronization of fractional-order complex dynamical networks,” Optik, vol. 127, no. 19, pp. 7395–7407, 2016.
  24. W.-H. Chen, Z. Wang, and X. Lu, “On sampled-data control for master-slave synchronization of chaotic Lur'e systems,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 59, no. 8, pp. 515–519, 2012.
  25. W.-H. Chen and W. X. Zheng, “An improved stabilization method for sampled-data control systems with control packet loss,” IEEE Transactions on Automatic Control, vol. 57, no. 9, pp. 2378–2384, 2012.
  26. D. Ding, Z. Wang, G. Wei, and F. E. Alsaadi, “Event-based security control for discrete-time stochastic systems,” IET Control Theory & Applications, vol. 10, no. 15, pp. 1808–1815, 2016.
  27. H. Li, X. Liao, T. Huang, and W. Zhu, “Event-triggering sampling based leader-following consensus in second-order multi-agent systems,” IEEE Transactions on Automatic Control, vol. 60, no. 7, pp. 1998–2003, 2015.
  28. D. Wang, D. R. Liu, Q. C. Zhang, and D. B. Zhao, “Data-based adaptive critic designs for nonlinear robust optimal control with uncertain dynamics,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 11, pp. 1544–1555, 2016.
  29. J. Wang, X.-M. Zhang, and Q.-L. Han, “Event-triggered generalized dissipativity filtering for neural networks with time-varying delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 1, pp. 77–88, 2015.
  30. L. C. Wang, Z. D. Wang, T. W. Huang, and G. L. Wei, “An event-triggered approach to state estimation for a class of complex networks with mixed time delays and nonlinearities,” IEEE Transactions on Cybernetics, vol. 46, no. 11, pp. 2497–2508, 2016.
  31. Z. Wang and D. Liu, “A data-based state feedback control method for a class of nonlinear systems,” IEEE Transactions on Industrial Informatics, vol. 9, no. 4, pp. 2284–2292, 2013.
  32. S. Wen, T. Huang, X. Yu, M. Z. Chen, and Z. Zeng, “Aperiodic sampled-data sliding-mode control of fuzzy systems with communication delays via the event-triggered method,” IEEE Transactions on Fuzzy Systems, vol. 24, no. 5, pp. 1048–1057, 2016.
  33. H.-Q. Xiao, Y. He, M. Wu, S.-P. Xiao, and J. She, “New results on H∞ tracking control based on the T-S fuzzy model for sampled-data networked control system,” IEEE Transactions on Fuzzy Systems, vol. 23, no. 6, pp. 2439–2448, 2015.
  34. C.-K. Zhang, Y. He, and M. Wu, “Improved global asymptotical synchronization of chaotic Lur'e systems with sampled-data control,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 56, no. 4, pp. 320–324, 2009.
  35. H. Zhang, J. Liu, D. Ma, and Z. Wang, “Data-core-based fuzzy min-max neural network for pattern classification,” IEEE Transactions on Neural Networks, vol. 22, no. 12, pp. 2339–2352, 2011.
  36. X. X. Yin, D. Yue, S. L. Hu, C. Peng, and Y. S. Xue, “Model-based event-triggered predictive control for networked systems with data dropout,” SIAM Journal on Control and Optimization, vol. 54, no. 2, pp. 567–586, 2016.
  37. X.-M. Zhang and Q.-L. Han, “Event-triggered dynamic output feedback control for networked control systems,” IET Control Theory & Applications, vol. 8, no. 4, pp. 226–234, 2014.
  38. R. Zheng, X. Yi, W. Lu, and T. Chen, “Stability of analytic neural networks with event-triggered synaptic feedbacks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 483–494, 2016.
  39. H. Ye, J. Gao, and Y. Ding, “A generalized Gronwall inequality and its application to a fractional differential equation,” Journal of Mathematical Analysis and Applications, vol. 328, no. 2, pp. 1075–1081, 2007.