In this special issue on computational intelligence (CI) and complex systems, a plethora of approaches is presented on how to apply computational intelligence in the modeling, control, decision support, and optimization of complex problems. The three main components of CI are fuzzy systems, evolutionary and population based algorithms, and artificial neural networks. In addition, there are other related techniques, among others chaos theory and subjective probability. Often, combinations and hybrids of these methods, sometimes complemented by classic mathematical or modeling tools, turn out to be the most efficient in the solution of real life problems. Illustrations of almost all of these methods may be found in the series of 26 papers.

What is the main goal and target of these research problems? Several years ago, the Lead Guest Editor published a study on how to use fuzzy rule based systems as tools for solving what was called (maybe in a somewhat exaggerated way) the “Key Problem of Engineering,” even though “Key Problem of Engineering” is not a term accepted by consensus in the relevant literature. Moreover, the term Engineering has many different interpretations, the narrowest referring to technological sciences and disciplines only, while various broader interpretations include computer science, agricultural engineering, social engineering, and certain aspects of economic models. In the present special issue, Engineering may be replaced by concepts such as “Applied Problems” or “Real Life Applications.”

The essential point is that problems arising in various real life contexts share a number of common features, namely, their high complexity and the requirement of solving the problem with “reasonable quality” from the application side. The behavior of this kind of model was addressed in a form reduced to optimizing fuzzy rule based models from the point of view of practical applicability, called the “Fuzzy Cat and Mouse Problem”: a thought experiment where the “fuzzy cat” was the implementation of a simple CI approach (the model and algorithm representing a Mamdani-type fuzzy system). In this example, the problem to be solved is to catch a mouse following a very simple and mechanical algorithm:
(1) The cat takes a “picture” of the position and dynamic features of the mouse.
(2) The rule base in the “head of the cat” calculates the likely future position of the mouse at the time of the cat grasping it. (Because of the uncertainty involved, this future position may be given only as an estimated area.)
(3) The cat searches the estimated future position area of the mouse systematically until it catches the mouse.

Essentially, two time periods add up when the total time of catching the mouse is calculated:
(i) the time of running the Mamdani system to determine the area containing the future position of the mouse,
(ii) the time of searching that area (exhaustively).

This total time is to be minimized. The dilemma is the following: if the model is very refined and precise, its evaluation, i.e., the calculation of the expected future position of the mouse, takes longer; the uncertainty of the mouse’s position therefore becomes bigger, and the area to be searched becomes larger. However, within the limits of this uncertainty, the estimated future position (the “area center”) is calculated more precisely. Thus, the resulting uncertainty of the future position area, which is identical to the area to be searched in step 3 of the algorithm, is determined by a delicate balance between the fineness and precision of the prediction model and the speed of calculating the result: the faster (and hence coarser) the prediction model, the larger the area that must be searched, and vice versa.
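This trade-off can be illustrated with a small numerical sketch. All functions and constants below (evaluation time growing linearly with model fineness, position uncertainty growing during evaluation, search time proportional to the uncertainty area) are illustrative assumptions, not taken from the cited study:

```python
# Hypothetical cat-and-mouse trade-off: every constant here is an
# assumption chosen only to make the two competing effects visible.

def total_catch_time(fineness):
    """Total time = model evaluation time + exhaustive search time."""
    eval_time = 0.1 * fineness       # finer rule base -> slower evaluation
    drift = 1.0 + eval_time          # uncertainty grows while evaluating
    area = drift / fineness          # finer model -> more precise area center
    search_time = 2.0 * area         # exhaustive search proportional to area
    return eval_time + search_time

# Scan candidate model finenesses and pick the best balance.
best = min(range(1, 51), key=total_catch_time)
print(best, round(total_catch_time(best), 3))
```

Scanning a range of finenesses, the minimum of the total time is found at an intermediate value: a coarser model loses time searching a large area, while a finer one loses time computing the prediction.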

All other complex problems are similar in the sense that the balance of resource intensity and solution quality must satisfy a preliminarily fixed goal function (where parameters may set the relative importance of the two goal components). The entirety of CI methods, along with hybrid approaches in which CI is combined with traditional mathematical and statistical methods and computer science techniques, forms a rich toolkit from which solutions satisfying the user, or the presenter of a problem, to a high degree may be drawn.

It is not easy to say anything new about the concept of complexity in the Complexity Journal. The long series of articles published in the past presents a colorful and rich pool of complex problems with various approaches to the definition of complexity. These approaches agree that the high demand for resources (especially time and space) makes such problems intractable in the mathematical sense (usually they fall into the NP-hard class), and that they often involve nondeterministic behavior and all types of vagueness and uncertainty. The reader will judge whether all the articles in the special issue conform to this definition.

Next, a very brief description of all the papers follows. We arranged the papers according to the main CI approach applied, wherever this was possible, while the hybrid approaches were placed between the two respective main approaches they combine. Finally, the few “outliers,” the articles using less frequently applied CI techniques, were placed at the end of the issue.

In this collection of studies, the most frequently appearing methodological approach is unquestionably the wide and still continuously growing group of evolutionary and population based algorithms, which are applied for optimization and search. To use a pun, a permanent evolution of the evolutionary algorithms is going on, and in some applications even the more classic approaches themselves sometimes produce surprisingly good results. It is obvious that highly complex (NP-hard, exponential complexity) problems cannot generally be solved by any exact mathematical method. This is where the random element inherent in the evolutionary approaches gains its importance: both the mutation type operations and the gene/chromosome transfer operations contain a random component that allows global movements in the optimization space. Although no guarantee of an exact, or even almost exact, optimum may be given in the case of the evolutionary optimization algorithms, a rather good solution is often achieved by these meta-heuristic techniques, and simulation based evidence supports the expectation that they perform well consistently. Next, the papers classified into this category are listed.
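As a minimal illustration of the random components mentioned above (mutation and gene/chromosome transfer), the following sketch implements a bare-bones genetic algorithm on a toy bit-counting objective; the objective function and all parameters are arbitrary choices for the example, not taken from any of the papers in the issue:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def fitness(chromosome):
    """Toy objective: maximize the number of 1-bits in the chromosome."""
    return sum(chromosome)

def evolve(pop_size=20, genes=16, generations=60, p_mut=0.05):
    # Random initial population of bit-string chromosomes.
    pop = [[random.randint(0, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population (elitism).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)   # one-point crossover
            child = a[:cut] + b[cut:]
            # Mutation: the random component enabling global moves.
            child = [g ^ 1 if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the fitter half is carried over unchanged, the best fitness never decreases; crossover recombines good partial solutions, and mutation injects the randomness that keeps the search from stalling in one region of the space.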

X. Yin et al. propose, in the paper Improved Hybrid Fireworks Algorithm-Based Parameter Optimization in High-Order Sliding Mode Control of Hypersonic Vehicles, a relatively novel population based technique, the Fireworks Algorithm (FWA), with additional hybrid elements for the control of a highly nonlinear problem with uncertainty elements: the high order sliding mode control of hypersonic vehicles. The FWA is combined with some elements of the genetic algorithm, and the hybrid approach applied here shows a close relationship with the philosophy of the Bacterial Evolutionary Algorithm, where better and worse solutions are always kept and combined to ensure higher diversity. The main challenge here is to achieve satisfactory tracking control performance under uncertain elements in the system and in the environment. In the study, first, the complex relation between the design parameters and the cost function that evaluates the likelihood of system instability and violation of design requirements is modeled via stochastic robustness analysis. The proposed method is then applied for parameter optimization, and with the parameters thus obtained, the efficiency of the proposed hybrid FWA-based optimization method is demonstrated in the search for the optimal HV controller, in which the proposed method exhibits better performance than other algorithms.

An interesting approach to the problem of finding suitable parameters of complex models that satisfy specific requirements is presented in the paper by B. G. Kang et al. entitled Simulation-Based Optimization on the System-of-Systems Model via Model Transformation and Genetic Algorithm: A Case Study of Network-Centric Warfare. The authors propose a simulation-based optimization approach for modeling and make use of the System of Systems (SoS) model. The approach addresses two difficult yet important aspects: long simulation times and the need for extensive simulation-based analysis of different scenarios. The authors attempt to solve these problems by transforming an SoS model into a neural network and then applying genetic algorithms to find optimal values of the model parameters.

Location inventory problems are addressed in the next two works, coauthored by the same group of researchers. The first work, An Improved Differential Evolution Algorithm for a Multicommodity Location-Inventory Problem with False Failure Returns, by C. Li et al., focuses on a multicommodity location-inventory problem. The authors propose a mixed-integer nonlinear programming based model for studying a forward-reverse logistics network. The model allows minimizing the total cost of false failure returns. It is solved using a newly proposed population type heuristic methodology.

A different, although related, task of minimizing the total cost in the case of a location-inventory-routing problem is considered in A Nonlinear Integer Programming Model for Integrated Location, Inventory, and Routing Decisions in a Closed-Loop Supply Chain. H. Guo et al. formulate the problem as a nonlinear integer programming model. They use it for the optimization of such issues as facility location, inventory control, and vehicle routing. Further, they solve the model using an algorithm that combines simulated annealing with an adaptive genetic algorithm.

Processes of decision-making in industrial settings are covered in the paper entitled A Binary Cuckoo Search Big Data Algorithm Applied to Large-Scale Crew Scheduling Problems by J. Garcia et al. The authors apply meta-heuristic techniques to a scenario involving big data and the Internet of Things. In particular, they propose a Binary Cuckoo Search algorithm that uses the map-reduce programming paradigm of the Apache Spark tool and apply it to a crew scheduling problem. Aspects such as convergence times and conditions for obtaining acceptable results are investigated. Decision making in the form of project portfolio selection may be especially critical in the context of the software industry.

J. Xiao et al., in An Improved MOEA/D Based on Reference Distance for Software Project Portfolio Optimization, treat multiobjective evolutionary algorithms as an optimization tool for solving selection tasks. They obtain a Pareto-optimal front as the solution. The paper introduces and describes an improved version of the classic multiobjective algorithm, the modification consisting of incorporating a reference distance. The algorithm is used with two, three, and four objectives. An extensive investigation and a comparison with existing multiobjective algorithms are provided.

J. Lee et al. present the paper Effective Evolutionary Multilabel Feature Selection under a Budget Constraint, which improves on conventional methods that frequently violate budget constraints or result in inefficient searches due to ineffective exploration of some important features. The proposed method employs a novel exploration operation to enhance the search capabilities of a traditional genetic search, resulting in improved multilabel classification. Moreover, an empirical study based on 20 real-world datasets is presented, showing evidence for the advantageous features of the proposed method.

G. Cabrera-Guerrero et al. present a study entitled Parameter Tuning for Local-Search-Based Matheuristic Methods, which focuses on the parameter that determines the size of the subproblem that is generated by the heuristic method and solved by the exact method. They show how matheuristic performance varies as this parameter is modified. The paper considers a well-known NP-hard combinatorial optimization problem, namely, the capacitated facility location problem, as the experimental basis. Based on the obtained results, they discuss the effects of adjusting the size of the subproblems that are generated when using matheuristic methods such as the one considered in the paper. The paper also aims at studying the impact of parameter tuning on the performance of matheuristic methods.

The paper by Z. W. Geem et al. entitled Improved Optimization for Wastewater Treatment and Reuse System using Computational Intelligence proposes an optimization method applying computational intelligence to a wastewater treatment and reuse system. River water pollution by wastewater can have a significant negative impact on aquatic sustainability. Hence, accurate modeling of this complicated system, its cost-effective treatment, and the reuse decision is very important, because this optimization process is related to economic expenditure, societal health, and environmental deterioration. In order to optimize this complex system, three treatment or reuse options are considered, namely, micro-screening filtration, nitrification, and fertilization-oriented irrigation, on top of two already existing options, namely, settling and biological oxidation. The objective of the environmental optimization is to minimize the economic expenditure of life cycle costs while satisfying the public health standards in terms of groundwater quality and the environmental standards in terms of river water quality. The study improves the existing optimization model by pinpointing the critical deficit point of the dissolved oxygen sag curve using analytic differentiation. The proposed formulation considers more practical constraints such as the maximal size of the irrigation area and the minimal amount of the filtration treatment process. The results are obtained using an evolutionary type algorithm, the parameter-setting-free harmony search algorithm, and show that the proposed model finds optimal solutions successfully, while eliminating the critical deficit point found in the previous approaches.

The paper by X. Lv et al. entitled An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO proposes an improved test selection optimization model. Sensor data-based test selection optimization is the basis for designing test procedures that ensure the system is tested under the constraint of the conventional indices, such as the fault detection rate (FDR) and the fault isolation rate (FIR). From the perspective of equipment maintenance support, ambiguity isolation has a significant effect on the result of the test selection. In this study, an improved test selection optimization model is proposed by considering the ambiguity degree of fault isolation. In the new model, the fault test dependency matrix is adapted to model the correlation between the system faults and the test groups. The objective function of the proposed model is to minimize the test cost under the constraints of the FDR and FIR. An improved chaotic discrete Particle Swarm Optimization (PSO) algorithm is applied to solve the improved test selection optimization model. The new model is more consistent with complicated real life engineering systems, and the experimental results verify the effectiveness of the proposed method.

The next article, authored by L. Liu et al., is entitled Legendre Cooperative PSO Strategies for Trajectory Optimization and introduces some novel strategies for trajectory optimization. PSO is a population based stochastic optimization technique which performs well in a smooth search space. However, in the case of the trajectory optimization problem with arbitrary final time and multiple control variables, the smoothness of the variables cannot be guaranteed, since linear interpolation is widely used here. In the paper, a novel Legendre Cooperative PSO (LCPSO) is proposed, which uses Legendre orthogonal polynomials instead of linear interpolation. An additional control variable is introduced which transcribes the original optimal problem with arbitrary final time into one with fixed final time. Then, a practical fast one-dimensional interval search algorithm is designed to optimize this additional control variable. In order to improve convergence and prevent explosion of the LCPSO, a theorem on how to determine the boundaries of the coefficients of the polynomials is proved. Finally, in the numerical simulations, compared with the ordinary PSO and other more classical population based optimization algorithms, namely, GA and DE, it is demonstrated that the proposed LCPSO has lower dimensionality, faster convergence, and higher accuracy, while providing smoother control variables.

The paper by A. O. Belousov and T. R. Gazizov entitled Systematic Approach to Optimization for Protection against Intentional Ultrashort Pulses Based on Multiconductor Modal Filters proposes a new optimization approach for protection against intentional ultrashort pulses. The problem of protecting radio electronic equipment from ultrashort pulses is of utmost importance nowadays, since conductive interference poses the biggest danger to its proper functioning. The article considers the issue of protecting equipment by means of modal filters (MF) and analyzes the structures of multiconductor microstrip MFs. The authors present the results of a complex study of the possibility of conducting the optimization (both separate and simultaneous) of a multiconductor MF by different criteria, and they formulate the basic (electrical) optimization criteria for MF. They formulate the amplitude and time criteria for optimizing an MF with an arbitrary number of conductors in analytical form and thus obtain a general multicriteria objective function for optimizing an MF by different criteria. As a result, they formulate a hybrid model consisting of heuristic search and GA.

The study by Z. Nasar and S. W. Jaffry entitled Trust-Based Situation Awareness: Comparative Analysis of Agent-Based and Population-Based Modeling provides an insight into the field of trust based communication in multiagent simulation modeling. In the paper, the authors present a comparative study of two widely used approaches to modeling multiagent systems under situations with homogeneous and heterogeneous populations. The two approaches used in the study are the Agent Based Modeling (ABM) and Population Based Modeling (PBM) methods. The paper also provides results showing the sensitivity of the trust analysis, especially in the case of heterogeneous systems.

The next two papers represent somewhat different approaches, as actual biological and biochemical systems are investigated by evolutionary-population type methods. In the second of the two, a fuzzy modeling method is also used, so it may be considered the bridging article towards the next type of CI methods.

In the study Computational Analysis of Complex Population Dynamical Model with Arbitrary Order, F. Haq et al. tackle the solution of highly nonlinear problems by applying a population dynamical model to which, however, some classic mathematical tools, such as the Laplace Adomian decomposition method (LADM), are applied. The importance of fractional differential equations in the modeling of highly complex systems is beyond doubt, yet with traditional mathematical approaches such equations may not be solved at all, or only in a very time consuming way. Numerical calculations may also be very resource demanding, and the accuracy of the solution is then questionable. The authors briefly survey earlier attempts at solving this class of problems and then introduce the mathematical toolbox necessary for applying the Laplace transform to fractional differential equations, such as the Caputo derivative and the Riemann-Liouville fractional order integral operator. The applicability of the proposed method is demonstrated on a fractional order biological model. They show that their approach is more efficient than, and in particular converges better than, previous approaches. Even though this method may be considered a traditional technique, its biological application makes it interesting for this special issue and points towards further open questions in the field of hybrid CI models and algorithms, which may be answered in the future by combining this method with population based or other optimization approaches, so we decided to include the paper among the selected ones.

Biological systems are among the most complex ones. Therefore, the application of computational intelligence techniques to their modeling, in order to better understand their behavior and interrelations, is interesting and promising. An example is presented in the paper Simulations of Higher-Order Protein Organizations Using a Fuzzy Framework by B. Tüű-Szabó et al. The authors address the challenging task of gaining insight into the organizational principles of higher-order structures, such as proteins and nucleic acids. They propose a fuzzy mathematical framework to model and investigate interaction patterns and the multiplicity of conformational states in protein assemblies. Analysis of the model allows the authors to draw some novel conclusions about associations of biological polymers, and polymer formation in particular.

Having entered the fuzzy systems area, we next discuss papers that all apply fuzzy models, in some cases special, extended fuzzy concepts.

P. Baranyi discusses the need for certain advantageous features, such as rank/complexity reduction, trade-offs between complexity and accuracy, and the manipulation power of the Tensor Product (TP) form, in the article Extension of the Multi-TP Model Transformation to Functions with Different Numbers of Variables. In this regard, the paper presents a novel TP model transformation based concept in Takagi-Sugeno (TS) type fuzzy modeling and control. The latest extensions of the TP model transformation, called the multi- and generalized TP model transformations, are applicable to a set of functions where the dimensionality of the outputs of the functions may differ; however, there is a strict limitation on the dimensionality of their respective inputs, namely, they must be equal. This paper proposes a new, extended version of the TP model transformation that is applicable to a set of functions where both the input and output dimensionalities of the functions may differ. This makes it possible to transform complete multicomponent systems to TS fuzzy models, along with the above-mentioned advantages in the complexity reduction and manipulation of the model representation.

J. Wang et al. study the topic of hesitant fuzzy systems in the paper Some Hesitant Fuzzy Linguistic Muirhead Means with Their Application to Multiattribute Group Decision-Making. Specifically, they extend the Muirhead mean (MM), a useful aggregation technique able to consider the interrelationship among all aggregated arguments, to the hesitant fuzzy linguistic environment. As a consequence, several new hesitant fuzzy linguistic aggregation operators are introduced. These operators reflect the correlations among all the hesitant fuzzy linguistic elements. Furthermore, they propose a novel approach to multiattribute group decision-making (MAGDM) in a hesitant fuzzy linguistic context, based on the proposed operators. The paper concludes with a numerical experiment demonstrating the validity of the proposed methodology and a comparison with other related methods.
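For reference, the classical Muirhead mean that the paper extends to the hesitant fuzzy linguistic setting is, in its standard form (stated here from the general literature, not reproduced from the paper), defined for nonnegative arguments a1, ..., an and a parameter vector P = (p1, ..., pn) as

```latex
\mathrm{MM}^{P}(a_1, \dots, a_n)
  = \left( \frac{1}{n!} \sum_{\sigma \in S_n} \prod_{j=1}^{n} a_{\sigma(j)}^{\,p_j} \right)^{1 / \sum_{j=1}^{n} p_j}
```

where S_n denotes the set of all permutations of {1, ..., n}; summing over all permutations is what allows the operator to capture interrelationships among all aggregated arguments at once.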

In the paper by F. Lilik et al. entitled Improved Method for Predicting the Performance of the Physical Links in Telecommunications Access Networks, the authors propose a novel method combining fuzzy inference and fuzzy rule interpolation techniques with the wavelet transform, in order to predict the data transmission rate of a telecommunication network. Given the importance of such networks, used for the transmission of not only voice but also data, accurate modeling and simulation for predicting the quality of the network is very important, and the analysis of noise and of the signal loss function is an important area of investigation. The authors provide results on applying wavelet analysis and use fuzzy inference by fuzzy rule interpolation for generating verbal rules. The method offers a robust prediction model with a substantial reduction in the necessary number of measurements, compared with existing best practice.

A combination of the fuzzy and the neural network approach is applied in the next study.

S. Sotirov et al. present an intuitionistic fuzzy sets based method in A Hybrid Approach for Modular Neural Network Design using Intercriteria Analysis and Intuitionistic Fuzzy Logic, in order to remove some of the inputs and, hence, neurons. This method also decreases the error between the desired goal value and the actual value obtained at the output of a Modular Neural Network (MNN). This type of neural network combines several simple neural models to reduce the complexity of the solution of a complex problem, and can be used, e.g., for object recognition and identification. Usually, the inputs of the MNN can be fed with independent data. However, there are certain limits in the use of MNNs, and the number of neurons is one of the major parameters in their implementation. The proposed method speeds up the learning process, a much desired effect. Furthermore, this method, based on intercriteria analysis and intuitionistic fuzzy logic, can also be used with good results for assessing the independence of data.

In the article The Intuitionistic Fuzzy Linguistic Cosine Similarity Measure and Its Application in Pattern Recognition by D. Liu et al., the authors propose the use of cosine similarity measures to enhance the precision of intuitionistic fuzzy linguistic sets (IFLSs) and interval-valued intuitionistic fuzzy linguistic sets (IVIFLSs) in a modeling problem. IFLSs were introduced in order to handle applications where the decision needs to be made under (partially) nondeterministic conditions in the linguistic evaluation, and where it is difficult to handle the problem using only membership degrees of linguistic terms. The authors claim that, at the time of publication, no work had yet been done on the use of cosine similarity measures on IFLSs. The paper provides studies to fill this gap.

In the third block the papers applying neural network (NN) techniques are briefly introduced, although the first one shows certain characteristics of evolutionary approaches as well.

A. B. Csapo proposes the Spiral Discovery Network (SDN), a neural network-inspired formulation of the Spiral Discovery Method, in the work The Spiral Discovery Network as an Automated General-Purpose Optimization Tool. The Spiral Discovery Method is a cognitive artifact that was originally designed for user-guided interactive search in high-dimensional nonlinear parameter spaces. The SDN proposed in this paper extends that model by complementing its autoregressive update profile with a periodically activated hyperparameter update function, which serves to model, and replace, the interventions that were previously carried out by the user. The SDN can be interpreted as an adaptive search algorithm that shows commonalities with supervised neural networks, evolutionary methods, and other optimization methods, but is also different in the sense that it does not directly rely on gradient-based feedback or on biologically inspired concepts such as genes or populations. The applicability of the approach is demonstrated through an example search space characterized by misleading gradient information and a narrow global minimum.

The next papers are purely NN approaches. An NN often allows obtaining relevant dynamic information about unknown nonlinear systems. In this framework, E. Irigoyen et al. present the paper About Extracting Dynamic Information of Unknown Complex Systems by Neural Networks. Based on the assumption that the dynamic behavior is too challenging for an accurate mathematical model to be obtained, they present, using a Multilayer Perceptron (MLP), a system representation formulated with state variables, which can be exported to an NN structure. The equilibrium states are studied by calculating the Jacobian matrix of the system through the NN model, and different examples are analyzed.

In the article by M. I. Dieste-Velasco et al. entitled Regression and ANN Models for Electronic Circuit Design, the authors investigate the use of NNs for modeling the design of electronic circuits. The paper provides a comparison with the commonly used approach of constructing the model by simple regression analysis. The analysis of electronic circuits with various input parameters can be rather complex. The paper demonstrates that a straightforward NN can provide a good analytic model, helping to understand the behavior of current and voltage in electronic circuits.

A nowadays rarely applied CI approach, a chaos based model, is proposed in the next work.

In Experimental Verification of Optimized Multiscroll Chaotic Oscillators Based on Irregular Saturated Functions, J. M. Muñoz-Pacheco et al. discuss an approach applying multiscroll chaotic attractors generated by irregular saturated nonlinear functions. These functions are designed in an irregular way by modifying their parameters, such as slopes, delays between slopes, and breakpoints, and then the positive Lyapunov exponent (LE) is optimized using the differential evolution algorithm to obtain chaotic attractors with 2 to 5 scrolls. The resulting chaotic attractors present more complex dynamics when different patterns of irregular saturated nonlinear functions are considered. The optimized chaotic oscillators have been physically implemented with the help of a discrete analog circuit to validate the use of the proposed irregular saturated functions. The experimental results achieved are consistent with two different types of simulators. The authors compare the results with other approaches using various evolutionary optimization algorithms and conclude that the new chaos-based technique produces better LE values than the other two used as reference.

Finally, a method applying a subjective probabilistic approach closes the series of studies on CI applications for complex problems.

Food is an important resource all over the world, especially in areas with growing populations. In the paper by Y. E. Shao and J.-T. Dai on Integrated Feature Selection of ARIMA with Computational Intelligence Approaches for Food Crop Price Prediction, the authors investigate ways of using computational intelligence to provide accurate food crop price predictions. In the paper, they investigate the use of single and integrated models for predicting the prices of food crops such as rice, wheat, and corn. In the integrated models, the authors use ARIMA as a feature selection method.

We hope that the Reader will enjoy this selected collection of papers, that the novel scientific ideas in them will be starting points for new research and will, maybe, trigger entirely new ideas, and that they will thus lead to more efficient solutions of highly complex problems in both basic and applied fields.

Conflicts of Interest

The editors declare that they have no conflicts of interest regarding the publication of this special issue.

Laszlo T. Koczy
Jesus Medina
Marek Reformat
Kok Wai Wong
Jin Hee Yoon